path: root/intel/intel_bufmgr.h
Age  Commit message  Author
2012-01-04  intel: Add an interface for setting the output file for decode.  Eric Anholt
Consumers often want to choose stdout vs stderr, and for testing I want to output to an open_memstream file.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
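For illustration, a minimal consumer sketch that sends decode output to an in-memory stream via open_memstream(3), assuming the drm_intel_decode_* entry points declared in this header; decode_batch_to_string() and its arguments are made up for the example:

    #include <stdio.h>
    #include <stdint.h>
    #include "intel_bufmgr.h"

    /* Capture batch decode output in memory instead of a stdio stream. */
    static char *decode_batch_to_string(uint32_t devid,
                                        uint32_t *batch, int count)
    {
            char *buf = NULL;
            size_t len = 0;
            FILE *out = open_memstream(&buf, &len);
            struct drm_intel_decode *ctx = drm_intel_decode_context_alloc(devid);

            drm_intel_decode_set_output_file(ctx, out);   /* send output to our memstream */
            drm_intel_decode_set_batch_pointer(ctx, batch, 0, count);
            drm_intel_decode(ctx);

            drm_intel_decode_context_free(ctx);
            fclose(out);    /* flushes the stream into buf/len */
            return buf;     /* caller frees */
    }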
2011-12-29  intel: Get intel_decode.c minimally building.  Eric Anholt
My plan is to use this drm_intel_dump_batchbuffer() interface for the current GPU tools, and the current Mesa batch dumping usage, while eventually building more interesting interfaces for other uses. Warnings are currently suppressed by using a helper lib with CFLAGS set manually, because the code is totally not ready for libdrm's warnings setup.

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Eugeni Dodonov <eugeni@dodonov.net>
2011-12-05  intel: Add an interface to limit vma caching  Chris Wilson
There is a per-process limit on the number of vma that the process can keep open, so we cannot keep an unlimited cache of unused vma (besides, keeping track of all those vma in the kernel adds considerable overhead). However, in order to work around inefficiencies in the kernel it is beneficial to reuse the vma, so keep an MRU cache of vma.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
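A short sketch of how a consumer might apply the cap, assuming the drm_intel_bufmgr_gem_set_vma_cache_size() entry point this change adds; the 16 KiB batch size and the limit of 512 are arbitrary illustration values:

    #include "intel_bufmgr.h"

    drm_intel_bufmgr *init_bufmgr(int fd)
    {
            drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 16 * 1024);

            /* Keep at most 512 unused mappings cached for reuse. */
            drm_intel_bufmgr_gem_set_vma_cache_size(bufmgr, 512);
            return bufmgr;
    }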
2011-10-28  intel: Add an interface for removing relocs after they're added.  Eric Anholt
This lets us replace the current inner drawing loop of mesa:

    for each prim {
        compute bo list
        if (check_aperture_space(bo list)) {
            batch_flush()
            compute bo list
            if (check_aperture_space(bo list)) {
                whine_about_batch_size()
                fall back;
            }
        }
        upload state to BOs
    }

with this inner loop:

    for each prim {
    retry:
        upload state to BOs
        if (check_aperture_space(batch)) {
            if (!retried) {
                reset_to_last_prim()
                batch_flush()
            } else {
                if (batch_flush())
                    whine_about_batch_size()
                goto retry;
            }
        }
    }

This avoids having to implement code to walk over certain sets of GL state twice (the "compute bo list" step). While it's not a performance improvement, it's a significant win in code complexity: about -200 lines, and one place to make mistakes related to aperture space instead of N places to forget some BO we should have included.

Note how if we do a reset in the new loop, we immediately flush. We don't need to check aperture space -- the kernel will tell us if we actually ran out of aperture or not. And if we did run out of aperture, it's because either the single prim was too big, or because check_aperture was wrong at the point of setting up the last primitive.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
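A compact sketch of that retry pattern in terms of the new entry points, assuming the drm_intel_gem_bo_get_reloc_count() and drm_intel_gem_bo_clear_relocs() prototypes added here; struct batch, emit_prim_state() and batch_flush() stand in for hypothetical driver-side code:

    #include <stdbool.h>
    #include "intel_bufmgr.h"

    struct batch {                        /* hypothetical driver batch state */
            drm_intel_bo *bo;
    };

    extern void emit_prim_state(struct batch *b, const void *prim);  /* hypothetical */
    extern void batch_flush(struct batch *b);                        /* hypothetical */

    static void draw_prim(struct batch *b, const void *prim)
    {
            /* Remember how many relocs the batch had before this primitive. */
            int saved = drm_intel_gem_bo_get_reloc_count(b->bo);
            bool retried = false;

    retry:
            emit_prim_state(b, prim);     /* upload state to BOs */

            if (drm_intel_bufmgr_check_aperture_space(&b->bo, 1) != 0) {
                    /* Roll back the relocs emitted for this primitive. */
                    drm_intel_gem_bo_clear_relocs(b->bo, saved);
                    if (!retried) {
                            retried = true;
                            batch_flush(b);
                            goto retry;
                    }
                    /* Even an empty batch can't fit this prim: fall back. */
            }
    }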
2011-06-04  intel: Add interface to query aperture sizes.  Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
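A minimal usage sketch, assuming the drm_intel_get_aperture_sizes() prototype added by this change; the device node path is illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include "intel_bufmgr.h"

    int main(void)
    {
            size_t mappable = 0, total = 0;
            int fd = open("/dev/dri/card0", O_RDWR);

            if (fd >= 0 &&
                drm_intel_get_aperture_sizes(fd, &mappable, &total) == 0)
                    printf("aperture: %zu MiB mappable, %zu MiB total\n",
                           mappable >> 20, total >> 20);
            return 0;
    }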
2010-12-19  intel: Export CONSTANT_BUFFER addressing mode  Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
2010-11-25  intel: Add a forward declaration of struct drm_clip_rect  Chris Wilson
... so that intel_bufmgr.h can be compiled standalone.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
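The pattern this relies on, in an illustrative fragment (emit_cliprects() is a made-up consumer, not part of the header):

    /* Forward declaration: the tag alone is enough for pointer parameters,
     * so the header no longer needs the full definition from drm.h. */
    struct drm_clip_rect;

    void emit_cliprects(const struct drm_clip_rect *rects, int count);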
2010-08-26  Avoid use of C++ reserved keyword "virtual" when using a C++ compiler.  Eric Anholt
Avoids requiring nasty hacks around libdrm headers in the new C++ parts of Mesa drivers.
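One way to express that kind of guard, as an illustrative sketch; the struct and member names here are placeholders, not necessarily what intel_bufmgr.h uses:

    typedef struct example_bo {
            unsigned long size;
    #ifdef __cplusplus
            void *virt;          /* "virtual" is a reserved keyword in C++ */
    #else
            void *virtual;       /* original member name kept for C users */
    #endif
    } example_bo;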
2010-06-06  intel: Add support for kernel multi-ringbuffer API.  Zou Nan hai
This introduces a new API to exec on the BSD ring buffer, for H.264 VLD decoding.

Signed-off-by: Xiang Hai hao <haihao.xiang@intel.com>
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
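A small sketch of submitting a batch to the BSD (video) ring, assuming the drm_intel_bo_mrb_exec() entry point this adds and the I915_EXEC_BSD flag from the kernel's i915_drm.h; submit_vld_batch() is a made-up wrapper:

    #include <i915_drm.h>
    #include "intel_bufmgr.h"

    int submit_vld_batch(drm_intel_bo *batch_bo, int used_bytes)
    {
            /* No cliprects for a video decode batch; target the BSD ring. */
            return drm_intel_bo_mrb_exec(batch_bo, used_bytes,
                                         NULL, 0, 0, I915_EXEC_BSD);
    }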
2010-05-11  intel: query whether a buffer is reusable.  Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
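A short sketch pairing the new query with the existing drm_intel_bo_disable_reuse(); the share_bo() wrapper is illustrative:

    #include "intel_bufmgr.h"

    void share_bo(drm_intel_bo *bo)
    {
            /* A buffer handed to another process shouldn't go back into
             * the bufmgr's bucket cache on its last unreference. */
            if (drm_intel_bo_is_reusable(bo))
                    drm_intel_bo_disable_reuse(bo);
    }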
2010-03-02  libdrm/intel: e
2009-11-20  Merge remote branch 'origin/master' into libdrm  Kristian Høgsberg
2009-11-17  Move libdrm/ up one level  Kristian Høgsberg