Tuesday, 10 February 2009

Success!

Have got VDPAU rendering to a GL texture and displaying on the screen.  Have some issues with playback speed to work through, but it's great to see things working on screen!

CPU usage is incredibly low.

Progress at last!

Made some major progress with this tonight/today.  Hopefully some good news coming soon - I've managed to get VDPAU rendering onto the GL texture fine now... it turned out to be a combination of several silly bugs in the routine I was using to create the GLXPixmap and texture.
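
For anyone trying the same thing, the texture-from-pixmap path ends up looking roughly like the sketch below.  This is a simplified sketch rather than the actual XBMC routine - the function names, the RGBA format and the missing error handling are all assumptions - but it shows the FBConfig attributes and the glXCreatePixmap / glXBindTexImageEXT calls that have to line up with each other.

#include <GL/glx.h>
#include <GL/glxext.h>

static PFNGLXBINDTEXIMAGEEXTPROC    pglXBindTexImageEXT;
static PFNGLXRELEASETEXIMAGEEXTPROC pglXReleaseTexImageEXT;

GLXPixmap CreateGLXPixmap(Display* dpy, Pixmap pixmap)
{
  pglXBindTexImageEXT = (PFNGLXBINDTEXIMAGEEXTPROC)
      glXGetProcAddress((const GLubyte*)"glXBindTexImageEXT");
  pglXReleaseTexImageEXT = (PFNGLXRELEASETEXIMAGEEXTPROC)
      glXGetProcAddress((const GLubyte*)"glXReleaseTexImageEXT");

  /* The FBConfig must support pixmap drawables AND be bindable as an RGBA texture. */
  const int fbAttribs[] = {
    GLX_DRAWABLE_TYPE,               GLX_PIXMAP_BIT,
    GLX_BIND_TO_TEXTURE_RGBA_EXT,    True,
    GLX_BIND_TO_TEXTURE_TARGETS_EXT, GLX_TEXTURE_2D_BIT_EXT,
    None
  };
  int count = 0;
  GLXFBConfig* configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);
  if (!configs || count == 0)
    return 0;

  /* Wrap the X pixmap as a GLXPixmap, telling GLX how it will be bound later. */
  const int pixAttribs[] = {
    GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
    GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
    None
  };
  GLXPixmap glxPixmap = glXCreatePixmap(dpy, configs[0], pixmap, pixAttribs);
  XFree(configs);
  return glxPixmap;
}

void DrawFromPixmap(Display* dpy, GLXPixmap glxPixmap, GLuint texture)
{
  glBindTexture(GL_TEXTURE_2D, texture);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  /* no mipmaps with tfp */
  pglXBindTexImageEXT(dpy, glxPixmap, GLX_FRONT_LEFT_EXT, NULL);
  /* ... render the textured quad here ... */
  pglXReleaseTexImageEXT(dpy, glxPixmap, GLX_FRONT_LEFT_EXT);
}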

A good friend and I have been working pretty hard on this over the past few days, so it's nice to see some fruit of our labour.

Saturday, 7 February 2009

R.I.P. X

Been a long and unproductive day - initially I had glXBindTexImageEXT slapping me with BadWindow errors, despite the Display* and pixmap both checking out.  Got around that, only to have it crash X each time it was called instead.  Finally making some progress again now: the VDPAU presentation queue is spitting out processed frames and I'm dumping them to disk at a surprisingly fast rate.
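
For reference, pulling a finished frame back off a VDPAU output surface and dumping it to disk looks roughly like this.  Again, a simplified sketch rather than the real code: it assumes the VdpOutputSurfaceGetBitsNative pointer has already been fetched via VdpGetProcAddress, and that the surface is in B8G8R8A8 format.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <vdpau/vdpau.h>

void DumpSurface(VdpOutputSurfaceGetBitsNative* get_bits_native,
                 VdpOutputSurface surface,
                 uint32_t width, uint32_t height,
                 const char* path)
{
  uint32_t pitch = width * 4;                    /* B8G8R8A8: 4 bytes per pixel */
  void*    data  = malloc(pitch * height);
  void* const planes[]  = { data };
  uint32_t    pitches[] = { pitch };

  /* A NULL source rect means "read back the whole surface". */
  VdpStatus st = get_bits_native(surface, NULL, planes, pitches);
  if (st == VDP_STATUS_OK)
  {
    FILE* f = fopen(path, "wb");
    if (f)
    {
      fwrite(data, 1, pitch * height, f);        /* raw BGRA dump, one file per frame */
      fclose(f);
    }
  }
  free(data);
}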

XBMC VDPAU

This is a quick and dirty blog to document my efforts at bringing hardware decoding to XBMC.  So far it's been a pretty steep learning curve; a lot of the techniques involved in adding VDPAU to XBMC aren't required by the other projects which have implemented it so far... specifically the OpenGL texturing.

This, to be honest, is a pain in the arse.  So far I've got XBMC decoding and rendering its output to an X11 surface pretty much perfectly (the same method as used in the mplayer patches and MythTV), however this surface now needs to be transferred to an OpenGL texture ready for XBMC to render it.  I've got this working, but the current implementation copies the Pixmap to the CPU, creates a GL texture out of it, and then dispatches it back to the GPU ready for display.  Not a problem at SD, but with the 'killa' sample - the infamous scene from Planet Earth, a short piece of high-bandwidth h264 @ 1920x1080 - things are a little more problematic.
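
The interim CPU round-trip is essentially the following - a minimal sketch rather than the actual implementation, assuming a 32-bit BGRA pixmap and glossing over pitch/alignment handling.  The XGetImage readback and the glTexImage2D re-upload every frame are where the CPU time goes.

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/gl.h>

void UploadPixmapViaCPU(Display* dpy, Pixmap pixmap,
                        int width, int height, GLuint texture)
{
  /* Read the pixmap contents back from the X server into client memory. */
  XImage* image = XGetImage(dpy, pixmap, 0, 0, width, height,
                            AllPlanes, ZPixmap);
  if (!image)
    return;

  /* Push the pixels straight back to the GPU as a 2D texture. */
  glBindTexture(GL_TEXTURE_2D, texture);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
               GL_BGRA, GL_UNSIGNED_BYTE, image->data);

  XDestroyImage(image);
}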

My AMD 5000+ using FFmpeg software decoding manages this scene with around 210 dropped frames (admittedly not the best method of determining performance).  Using the VDPAU->CPU->OpenGL method I've got this down to around 90 dropped frames, but it's still using a massive chunk of CPU time... which is the part I'd like to reduce the most.

Currently I'm experimenting with performing this process entirely on the GPU.  According to sources on nvnews.net (primarily Stephen Warren from NVIDIA), the best method to use would be an OpenGL extension - glXBindTexImageEXT.  This is proving to be somewhat tricky, primarily down to the lack of decent documentation and prior usage.  I'll keep bashing away!
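
The first hurdle is simply getting hold of the entry point, since it's an extension rather than core GL.  Something along these lines - a sketch, not necessarily what will end up in XBMC - checks that GLX_EXT_texture_from_pixmap is actually advertised before grabbing the function pointer:

#include <string.h>
#include <GL/glx.h>
#include <GL/glxext.h>

PFNGLXBINDTEXIMAGEEXTPROC LookupBindTexImageEXT(Display* dpy)
{
  /* The extension has to be advertised by the driver before the entry point can be trusted. */
  const char* extensions = glXQueryExtensionsString(dpy, DefaultScreen(dpy));
  if (!extensions || !strstr(extensions, "GLX_EXT_texture_from_pixmap"))
    return NULL;

  return (PFNGLXBINDTEXIMAGEEXTPROC)
      glXGetProcAddress((const GLubyte*)"glXBindTexImageEXT");
}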