path: root/shared-core/nouveau_fifo.c
Age         Commit message                                        Author
2008-03-07  nouveau: redo channel idle detection  (Ben Skeggs)
Will hopefully work a bit better than previous code, which depended on knowing the channel's most recent PUT value. Some chips always return 0 on reading these regs, and currently userspace is the only other entity which knows the value.
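A minimal sketch of the kind of idle-wait this describes: poll until the hardware's own GET pointer catches up with its own PUT pointer, so the channel's last software PUT value (which some chips read back as 0) is never needed. The register names, indices, and the fake register file are invented stand-ins for the real MMIO accessors, not the commit's code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder MMIO: a fake register file standing in for real reads. */
static uint32_t fake_regs[2] = { 0x100, 0x100 };
#define HYP_CACHE1_DMA_GET 0 /* placeholder index, not a real offset */
#define HYP_CACHE1_DMA_PUT 1

static uint32_t rd32(unsigned reg) { return fake_regs[reg]; }

/* Idle once hardware GET == hardware PUT, within a bounded poll. */
static bool chan_idle(unsigned timeout)
{
        while (timeout--) {
                if (rd32(HYP_CACHE1_DMA_GET) == rd32(HYP_CACHE1_DMA_PUT))
                        return true;
                /* real code would udelay() between polls */
        }
        return false;
}

int main(void)
{
        printf("idle: %d\n", chan_idle(100));
        return 0;
}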
2008-03-07  nouveau: don't touch NV_USER regs on channel destroy.  (Ben Skeggs)
Not only was this entirely pointless, it actually causes my NV30GL to die randomly when channels are destroyed.
2008-02-22  nouveau: Remove some random (french) comment.  (Maarten Maathuis)
2008-01-30  nv40: some more nv67 changes  (Ben Skeggs)
With some luck the drm-side will be OK now for this chipset.
2007-11-14  nouveau: Also wait until CACHE1 gets emptied.  (Ben Skeggs)
2007-11-14  Revert "nouveau: stub superioctl"  (Ben Skeggs)
This reverts commit 2370ded79b4176d76cda1ec5f495fd33c2d566ed. Err.. didn't mean for that to slip in :)
2007-11-14  Merge branch 'fifo-cleanup' into upstream-master  (Ben Skeggs)
2007-11-14  nouveau: Attempt to wait for channel idle before we destroy it.  (Ben Skeggs)
2007-11-14  nouveau: Use "new" NV40 USER control regs.  (Ben Skeggs)
Probably entirely pointless, but a simple change in any case.
2007-11-14  nouveau: store user control reg offsets in channel struct  (Ben Skeggs)
2007-11-14  nouveau: funcs to determine active channel on PFIFO.  (Ben Skeggs)
2007-11-14  nouveau: stub superioctl  (Ben Skeggs)
2007-11-05  drm: remove lots of spurious whitespace.  (Dave Airlie)
Kernel "cleanfile" script run.
2007-09-29  nouveau: stop the fifo of the channel we are deleting  (Matthieu Castet)
2007-08-31  nouveau: give nv03 the last cut.  (Stephane Marchesin)
2007-08-21  nouveau: Poke 0x2230 on NV47 also.  (Ben Skeggs)
Makes 0x2220 work the same way as on NV40.
2007-08-10  nouveau/nv50: demagic instmem setup.  (Ben Skeggs)
2007-08-08  nouveau: Always allocate drm's push buffer in VRAM  (Ben Skeggs)
Fixes #11868
2007-08-08  nouveau: enable/disable engine-specific interrupts in _init()/_takedown()  (Ben Skeggs)
All interrupts are still masked by PMC until init is finished.
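A toy sketch of the two-level masking pattern this describes: each engine opens its own interrupt-enable register in _init() and closes it in _takedown(), while nothing fires until the top-level PMC mask is opened after init. Register names and the fake register file are invented, not the driver's.

#include <stdint.h>
#include <stdio.h>

#define HYP_PMC_INTR_EN   0 /* placeholder: master (PMC) enable */
#define HYP_PFIFO_INTR_EN 1 /* placeholder: per-engine enable */
static uint32_t fake_regs[2];
static void wr32(unsigned reg, uint32_t v) { fake_regs[reg] = v; }

static void hyp_fifo_init(void)     { wr32(HYP_PFIFO_INTR_EN, ~0u); }
static void hyp_fifo_takedown(void) { wr32(HYP_PFIFO_INTR_EN, 0); }

int main(void)
{
        hyp_fifo_init();          /* engine unmasks its own sources... */
        wr32(HYP_PMC_INTR_EN, 1); /* ...but nothing fires until PMC opens */
        printf("pfifo=%08x pmc=%u\n", fake_regs[1], fake_regs[0]);
        hyp_fifo_takedown();
        return 0;
}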
2007-08-06  nouveau: Remove PGRAPH_SURFACE hack, it won't work now anyway.  (Ben Skeggs)
Need to find another way of doing this; ideally someone'd hunt down which object/method controls it! The Xv blit adaptor is likely now broken on cards that have pNv->WaitVSyncPossible enabled.
2007-08-06  nouveau: Give DRM its own gpu channel  (Ben Skeggs)
If your card doesn't have working context switching, it is now broken.
2007-08-06  nouveau: Various internal and external API changes  (Ben Skeggs)
1. DRM_NOUVEAU_GPUOBJ_FREE: used to free GPU objects. The obvious use case is Gr objects, but notifiers can also be destroyed in the same way. GPU objects gain a destructor method and private data fields with this change, so other specialised cases (like notifiers) can be implemented on top of gpuobjs.
2. DRM_NOUVEAU_CHANNEL_FREE.
3. DRM_NOUVEAU_CARD_INIT: ideally we'd do init during module load, but this isn't currently possible. Doing init during firstopen() is bad, as X has a love of opening/closing the DRM many times during startup; once the modesetting-101 branch is merged this can go away. IRQs are enabled in nouveau_card_init() now, rather than having the X server call drmCtlInstHandler(). We'll need this for when we give the kernel module its own channel.
4. DRM_NOUVEAU_GETPARAM: add a CHIPSET_ID value, which returns the chipset id derived from NV_PMC_BOOT_0.
5. Use list_* in a few places, rather than home-brewed stuff.
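A rough illustration of point 1's destructor-method-plus-private-data idea: a specialised object (here a notifier) hangs its state off priv and tears it down via dtor when freed through the generic path. All struct and function names here are invented, not the driver's.

#include <stdio.h>
#include <stdlib.h>

struct hyp_device { int id; };

struct hyp_gpuobj {
        struct hyp_device *dev;
        /* type-specific teardown hook and private state */
        void (*dtor)(struct hyp_device *, struct hyp_gpuobj *);
        void *priv;
};

static void notifier_dtor(struct hyp_device *dev, struct hyp_gpuobj *obj)
{
        printf("freeing notifier state on device %d\n", dev->id);
        free(obj->priv);
}

static void hyp_gpuobj_free(struct hyp_gpuobj *obj)
{
        if (obj->dtor)
                obj->dtor(obj->dev, obj); /* specialised cleanup first */
        free(obj);
}

int main(void)
{
        struct hyp_device dev = { .id = 0 };
        struct hyp_gpuobj *obj = calloc(1, sizeof(*obj));

        obj->dev  = &dev;
        obj->dtor = notifier_dtor;
        obj->priv = malloc(16);
        hyp_gpuobj_free(obj);
        return 0;
}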
2007-08-06  nouveau: Pass channel struct around instead of channel id.  (Ben Skeggs)
2007-07-20  Replace DRM_IOCTL_ARGS with (dev, data, file_priv) and remove DRM_DEVICE.  (Eric Anholt)
The data is now in kernel space, copied in/out as appropriate according to the ioctl flags. This results in DRM_COPY_{TO,FROM}_USER going away, and error paths to deal with those failures. This also means that XFree86 4.2.0 support for i810 DRM is lost.
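Sketched with stub types, the post-change handler convention looks roughly like this: the core copies the user's argument into kernel memory before calling the handler and copies it back out afterwards, so handlers just cast data. hyp_getparam_ioctl and its argument struct are invented for illustration; the (dev, data, file_priv) shape is from the commit subject.

#include <stdio.h>

/* Stub types standing in for struct drm_device / struct drm_file. */
struct drm_device { int unused; };
struct drm_file   { int unused; };

struct hyp_getparam {            /* invented ioctl argument struct */
        unsigned int  param;
        unsigned long value;
};

static int hyp_getparam_ioctl(struct drm_device *dev, void *data,
                              struct drm_file *file_priv)
{
        struct hyp_getparam *arg = data; /* already in kernel space */
        (void)dev; (void)file_priv;

        arg->value = 42; /* placeholder result, copied back by the core */
        return 0;        /* 0 or -errno */
}

int main(void)
{
        struct drm_device dev;
        struct drm_file file;
        struct hyp_getparam arg = { .param = 1 };

        hyp_getparam_ioctl(&dev, &arg, &file);
        printf("value=%lu\n", arg.value);
        return 0;
}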
2007-07-20  Replace filp in ioctl arguments with drm_file *file_priv.  (Eric Anholt)
As a fallout, replace filp storage with file_priv storage for "unique identifier of a client" all over the DRM. There is a 1:1 mapping, so this should be a noop. This could be a minor performance improvement, as everything on Linux dereferenced filp to get file_priv anyway, while only the mmap ioctls went the other direction.
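A small sketch of why the mapping is 1:1 on Linux: the per-client struct already lived in the filp's private_data, so passing file_priv directly just removes one dereference. file_stub here is an invented stand-in for struct file.

#include <stdio.h>

struct drm_file { int client_id; /* per-client state */ };

struct file_stub { void *private_data; }; /* stands in for struct file */

static struct drm_file *client_from_filp(struct file_stub *filp)
{
        return filp->private_data; /* the one hop the change removes */
}

int main(void)
{
        struct drm_file client = { .client_id = 7 };
        struct file_stub filp = { .private_data = &client };

        printf("client %d\n", client_from_filp(&filp)->client_id);
        return 0;
}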
2007-07-20  Remove DRM_ERR OS macro.  (Eric Anholt)
This was used to make all ioctl handlers return -errno on Linux and errno on *BSD. Instead, just return -errno in shared code, and flip the sign on return from shared code to *BSD code.
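A toy illustration of the sign convention: shared code always returns -errno, and a *BSD entry point flips the sign once at the boundary instead of every handler wrapping values in DRM_ERR(). hyp_shared_op and hyp_bsd_entry are invented names, not DRM functions.

#include <errno.h>
#include <stdio.h>

/* Shared-code convention after the change: always return -errno. */
static int hyp_shared_op(int ok)
{
        return ok ? 0 : -EINVAL;
}

/* The *BSD boundary flips the sign exactly once. */
static int hyp_bsd_entry(int ok)
{
        return -hyp_shared_op(ok);
}

int main(void)
{
        printf("linux: %d, bsd: %d\n", hyp_shared_op(0), hyp_bsd_entry(0));
        return 0;
}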
2007-07-17  nouveau: G8x PCIEGART  (Ben Skeggs)
Actually an NV04-NV50 TTM backend for both PCI and PCIEGART, but PCIGART support for G8x using the current mm has been hacked on top of it.
2007-07-16  drm: remove drmP.h internal typedefs  (Dave Airlie)
2007-07-14  nouveau: nv10 and nv11/15 are different  (Patrice Mandin)
2007-07-13  nouveau: nuke internal typedefs, and drm_device_t use.  (Ben Skeggs)
2007-07-12  nouveau: separate region_offset into map_handle and offset.  (Ben Skeggs)
2007-07-12  fixed object creation code to not Oops on 64 bits, worked around memalloc not working on 64-bit for PCIGART  (Arthur Huillet)
2007-07-11  fixed bug that prevented PCIE cards from actually using PCIGART - NV50 will probably still have a problem  (Arthur Huillet)
2007-07-11  nouveau: Some checks on userspace object handles.  (Ben Skeggs)
2007-07-11  Added support for PCIGART for PCI(E) cards. Bumped DRM interface patchlevel.  (Arthur Huillet)
2007-07-09  nouveau: Allocate mappable VRAM for notifiers..  (Ben Skeggs)
2007-07-09  nouveau/nv50: Initial channel/object support  (Ben Skeggs)
Should be OK on G84 for a single channel, multiple channels *almost* work. Untested on G80.
2007-07-09  nouveau: rewrite gpu object code  (Ben Skeggs)
Allows multiple references to a single object, needed to support PCI(E)GART scatter-gather DMA objects which would quickly fill PRAMIN if each channel had its own.
Handle per-channel private instmem areas. This is needed to support NV50, but might be something we want to do on earlier chipsets at some point?
Everything that touches PRAMIN is a GPU object.
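A minimal, invented sketch of the multiple-reference idea: channels share one object instead of each keeping a PRAMIN-hungry copy, and the object dies with its last reference. The real code tracks instance memory, not a bare malloc'd struct.

#include <stdio.h>
#include <stdlib.h>

struct hyp_gpuobj {
        int refcount;
};

static struct hyp_gpuobj *hyp_gpuobj_new(void)
{
        struct hyp_gpuobj *obj = calloc(1, sizeof(*obj));
        obj->refcount = 1;
        return obj;
}

static struct hyp_gpuobj *hyp_gpuobj_ref(struct hyp_gpuobj *obj)
{
        obj->refcount++;
        return obj;
}

static void hyp_gpuobj_unref(struct hyp_gpuobj *obj)
{
        if (--obj->refcount == 0) {
                printf("last ref gone, freeing PRAMIN-backed object\n");
                free(obj);
        }
}

int main(void)
{
        struct hyp_gpuobj *sg = hyp_gpuobj_new(); /* shared GART object */
        struct hyp_gpuobj *chan2_ref = hyp_gpuobj_ref(sg);

        hyp_gpuobj_unref(sg);        /* channel 1 goes away */
        hyp_gpuobj_unref(chan2_ref); /* channel 2 goes away: freed */
        return 0;
}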
2007-06-28  nouveau: Hack around possible Xv blit adaptor breakage  (Ben Skeggs)
2007-06-28  nouveau: Nuke DMA_OBJECT_INIT ioctl (bumps interface to 0.0.7)  (Ben Skeggs)
For various reasons, this ioctl was a bad idea. At channel creation we now automatically create DMA objects covering available VRAM and GART memory, where the client used to do this themselves.
However, there is still a need to be able to create DMA objects pointing at specific areas of memory (ie. notifiers). Each channel is now allocated a small amount of memory from which a client can suballocate things (such as notifiers), and have a DMA object created which covers the suballocated area. The NOTIFIER_ALLOC ioctl exposes this functionality.
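A minimal sketch of the suballocation scheme just described: the channel gets one small block up front, and NOTIFIER_ALLOC-style requests carve offsets out of it; a DMA object would then be created covering exactly the returned range. A bump allocator with invented names keeps the illustration short; the driver's actual allocator is more involved.

#include <stdio.h>

struct hyp_notifier_block {
        unsigned long size; /* total block size */
        unsigned long used; /* bump pointer */
};

/* returns the offset of the new notifier, or -1 if the block is full */
static long hyp_notifier_alloc(struct hyp_notifier_block *blk,
                               unsigned long len)
{
        if (blk->used + len > blk->size)
                return -1;
        blk->used += len;
        return (long)(blk->used - len);
}

int main(void)
{
        struct hyp_notifier_block blk = { .size = 4096, .used = 0 };

        printf("notifier at %ld\n", hyp_notifier_alloc(&blk, 32));
        printf("notifier at %ld\n", hyp_notifier_alloc(&blk, 32));
        return 0;
}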
2007-06-25  nouveau: NV49/NV4B PGRAPH setup from jb17bsome and stephan_2303  (Ben Skeggs)
2007-06-24  nouveau: kill some dead code  (Ben Skeggs)
2007-06-24  nouveau: NV04/NV10/NV20 PGRAPH engtab functions  (Ben Skeggs)
NV04/NV10 load_context()/save_context() are stubs. I don't know enough about how they work to implement them sanely. The "old" context_switch() code remains hooked up, so it shouldn't break anything.
NV20 will probably break if load_context() works. No initial context values are filled in, so when the first channel is created PGRAPH will probably end up having its state zeroed. Some setup from nv20_graph_init() will probably need to be moved to the per-channel context setup.
2007-06-24  nouveau: NV3X PGRAPH engtab functions  (Ben Skeggs)
2007-06-24  nouveau: NV1X/2X/3X PFIFO engtab functions  (Ben Skeggs)
Earlier NV1X chips use the NV04 code, see previous commits about NV10 RAMFC entry size.
2007-06-24  nouveau: NV04 PFIFO engtab functions  (Ben Skeggs)
2007-06-24  nouveau: NV4X PGRAPH engtab functions  (Ben Skeggs)
2007-06-24  nouveau: NV4X PFIFO engtab functions  (Ben Skeggs)
2007-06-24  nouveau: split PFIFO/PGRAPH context creation  (Ben Skeggs)
2007-06-24  nouveau: (mostly) hook up put_base again  (Ben Skeggs)
lass="hl opt">; ++mem) { mem->succeed_count = 0; mem->free_count = 0; mem->fail_count = 0; mem->bytes_allocated = 0; mem->bytes_freed = 0; } si_meminfo(&si); DRM(ram_available) = si.totalram; DRM(ram_used) = 0; } /* drm_mem_info is called whenever a process reads /dev/drm/mem. */ static int DRM(_mem_info)(char *buf, char **start, off_t offset, int request, int *eof, void *data) { drm_mem_stats_t *pt; int len = 0; if (offset > DRM_PROC_LIMIT) { *eof = 1; return 0; } *eof = 0; *start = &buf[offset]; DRM_PROC_PRINT(" total counts " " | outstanding \n"); DRM_PROC_PRINT("type alloc freed fail bytes freed" " | allocs bytes\n\n"); DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", "system", 0, 0, 0, DRM(ram_available) << (PAGE_SHIFT - 10)); DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", "locked", 0, 0, 0, DRM(ram_used) >> 10); DRM_PROC_PRINT("\n"); for (pt = DRM(mem_stats); pt->name; pt++) { DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu %10lu | %6d %10ld\n", pt->name, pt->succeed_count, pt->free_count, pt->fail_count, pt->bytes_allocated, pt->bytes_freed, pt->succeed_count - pt->free_count, (long)pt->bytes_allocated - (long)pt->bytes_freed); } if (len > request + offset) return request; *eof = 1; return len - offset; } int DRM(mem_info)(char *buf, char **start, off_t offset, int len, int *eof, void *data) { int ret; spin_lock(&DRM(mem_lock)); ret = DRM(_mem_info)(buf, start, offset, len, eof, data); spin_unlock(&DRM(mem_lock)); return ret; } void *DRM(alloc)(size_t size, int area) { void *pt; if (!size) { DRM_MEM_ERROR(area, "Allocating 0 bytes\n"); return NULL; } if (!(pt = kmalloc(size, GFP_KERNEL))) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[area].fail_count; spin_unlock(&DRM(mem_lock)); return NULL; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[area].succeed_count; DRM(mem_stats)[area].bytes_allocated += size; spin_unlock(&DRM(mem_lock)); return pt; } void *DRM(calloc)(size_t nmemb, size_t size, int area) { void *addr; addr = DRM(alloc)(nmemb * size, area); if (addr != NULL) memset((void *)addr, 0, size * nmemb); return addr; } void *DRM(realloc)(void *oldpt, size_t oldsize, size_t size, int area) { void *pt; if (!(pt = DRM(alloc)(size, area))) return NULL; if (oldpt && oldsize) { memcpy(pt, oldpt, oldsize); DRM(free)(oldpt, oldsize, area); } return pt; } void DRM(free)(void *pt, size_t size, int area) { int alloc_count; int free_count; if (!pt) DRM_MEM_ERROR(area, "Attempt to free NULL pointer\n"); else kfree(pt); spin_lock(&DRM(mem_lock)); DRM(mem_stats)[area].bytes_freed += size; free_count = ++DRM(mem_stats)[area].free_count; alloc_count = DRM(mem_stats)[area].succeed_count; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n", free_count, alloc_count); } } unsigned long DRM(alloc_pages)(int order, int area) { unsigned long address; unsigned long bytes = PAGE_SIZE << order; unsigned long addr; unsigned int sz; spin_lock(&DRM(mem_lock)); if ((DRM(ram_used) >> PAGE_SHIFT) > (DRM_RAM_PERCENT * DRM(ram_available)) / 100) { spin_unlock(&DRM(mem_lock)); return 0; } spin_unlock(&DRM(mem_lock)); address = __get_free_pages(GFP_KERNEL, order); if (!address) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[area].fail_count; spin_unlock(&DRM(mem_lock)); return 0; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[area].succeed_count; DRM(mem_stats)[area].bytes_allocated += bytes; DRM(ram_used) += bytes; spin_unlock(&DRM(mem_lock)); /* Zero outside the lock */ memset((void *)address, 0, bytes); /* Reserve */ for (addr = address, sz = bytes; sz 
> 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { SetPageReserved(virt_to_page(addr)); } return address; } void DRM(free_pages)(unsigned long address, int order, int area) { unsigned long bytes = PAGE_SIZE << order; int alloc_count; int free_count; unsigned long addr; unsigned int sz; if (!address) { DRM_MEM_ERROR(area, "Attempt to free address 0\n"); } else { /* Unreserve */ for (addr = address, sz = bytes; sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { ClearPageReserved(virt_to_page(addr)); } free_pages(address, order); } spin_lock(&DRM(mem_lock)); free_count = ++DRM(mem_stats)[area].free_count; alloc_count = DRM(mem_stats)[area].succeed_count; DRM(mem_stats)[area].bytes_freed += bytes; DRM(ram_used) -= bytes; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n", free_count, alloc_count); } } void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev) { void *pt; if (!size) { DRM_MEM_ERROR(DRM_MEM_MAPPINGS, "Mapping 0 bytes at 0x%08lx\n", offset); return NULL; } if (!(pt = drm_ioremap(offset, size, dev))) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count; spin_unlock(&DRM(mem_lock)); return NULL; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count; DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size; spin_unlock(&DRM(mem_lock)); return pt; } void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size, drm_device_t *dev) { void *pt; if (!size) { DRM_MEM_ERROR(DRM_MEM_MAPPINGS, "Mapping 0 bytes at 0x%08lx\n", offset); return NULL; } if (!(pt = drm_ioremap_nocache(offset, size, dev))) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_MAPPINGS].fail_count; spin_unlock(&DRM(mem_lock)); return NULL; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count; DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_allocated += size; spin_unlock(&DRM(mem_lock)); return pt; } void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev) { int alloc_count; int free_count; if (!pt) DRM_MEM_ERROR(DRM_MEM_MAPPINGS, "Attempt to free NULL pointer\n"); else drm_ioremapfree(pt, size, dev); spin_lock(&DRM(mem_lock)); DRM(mem_stats)[DRM_MEM_MAPPINGS].bytes_freed += size; free_count = ++DRM(mem_stats)[DRM_MEM_MAPPINGS].free_count; alloc_count = DRM(mem_stats)[DRM_MEM_MAPPINGS].succeed_count; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(DRM_MEM_MAPPINGS, "Excess frees: %d frees, %d allocs\n", free_count, alloc_count); } } #if __OS_HAS_AGP DRM_AGP_MEM *DRM(alloc_agp)(int pages, u32 type) { DRM_AGP_MEM *handle; if (!pages) { DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Allocating 0 pages\n"); return NULL; } if ((handle = DRM(agp_allocate_memory)(pages, type))) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_TOTALAGP].succeed_count; DRM(mem_stats)[DRM_MEM_TOTALAGP].bytes_allocated += pages << PAGE_SHIFT; spin_unlock(&DRM(mem_lock)); return handle; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_TOTALAGP].fail_count; spin_unlock(&DRM(mem_lock)); return NULL; } int DRM(free_agp)(DRM_AGP_MEM *handle, int pages) { int alloc_count; int free_count; int retval = -EINVAL; if (!handle) { DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Attempt to free NULL AGP handle\n"); return retval; } if (DRM(agp_free_memory)(handle)) { spin_lock(&DRM(mem_lock)); free_count = ++DRM(mem_stats)[DRM_MEM_TOTALAGP].free_count; alloc_count = DRM(mem_stats)[DRM_MEM_TOTALAGP].succeed_count; DRM(mem_stats)[DRM_MEM_TOTALAGP].bytes_freed += pages << PAGE_SHIFT; 
spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Excess frees: %d frees, %d allocs\n", free_count, alloc_count); } return 0; } return retval; } int DRM(bind_agp)(DRM_AGP_MEM *handle, unsigned int start) { int retcode = -EINVAL; if (!handle) { DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, "Attempt to bind NULL AGP handle\n"); return retcode; } if (!(retcode = DRM(agp_bind_memory)(handle, start))) { spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_BOUNDAGP].succeed_count; DRM(mem_stats)[DRM_MEM_BOUNDAGP].bytes_allocated += handle->page_count << PAGE_SHIFT; spin_unlock(&DRM(mem_lock)); return retcode; } spin_lock(&DRM(mem_lock)); ++DRM(mem_stats)[DRM_MEM_BOUNDAGP].fail_count; spin_unlock(&DRM(mem_lock)); return retcode; } int DRM(unbind_agp)(DRM_AGP_MEM *handle) { int alloc_count; int free_count; int retcode = -EINVAL; if (!handle) { DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, "Attempt to unbind NULL AGP handle\n"); return retcode; } if ((retcode = DRM(agp_unbind_memory)(handle))) return retcode; spin_lock(&DRM(mem_lock)); free_count = ++DRM(mem_stats)[DRM_MEM_BOUNDAGP].free_count; alloc_count = DRM(mem_stats)[DRM_MEM_BOUNDAGP].succeed_count; DRM(mem_stats)[DRM_MEM_BOUNDAGP].bytes_freed += handle->page_count << PAGE_SHIFT; spin_unlock(&DRM(mem_lock)); if (free_count > alloc_count) { DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, "Excess frees: %d frees, %d allocs\n", free_count, alloc_count); } return retcode; } #endif