In September Andrew Brearley from MediaSmiths posted an interesting blog article (http://tiny.cc/ike3gqqq1) about the woes of compression within a Post Production workflow, and it led me to think long and hard about the valid case he put forward. But what, in the practical world, does that mean to a post house?
Tape has been with us for many years and, for almost all of its history, compression has closely followed it. Most notably, compression fell into two camps: Digi Beta with its DCT encoding (2:1 compressed, even though productions treated this as uncompressed!), and the firm favourite DV at 4:1:1 or 4:2:0 (5:1 compressed – remember the arguments broadcasters had over this format?).
Unless you were dealing with the likes of D1 material, then compression was an everyday feature of editing pictures, be it in a linear or non-linear sense. When NLE systems became capable of capturing images suitable for transmission (AVR75 anyone?), compression had to be used in order to allow the locally-attached disks to handle the data-rates. As time has progressed, uncompressed SD streams from local drives have become the norm, as indeed has multi-stream SD playback from centralised storage (SAN solutions), a point we will come back to later.
Timed almost to perfection, not only did the production industry leap into HD images using tape formats, HDCam or HDCam SR 4:4:4 for example, but these vastly detailed images fuelled the need for source compression to reduce the data-rate overheads while maintaining picture quality, which in turn led to file-based cameras and acquisition codecs.
With the current migration from tape to file-based camera formats, post houses cannot escape paying close attention to the size of the files compression creates, as well as to how heavily the codec compresses the image. Video codecs offer varying degrees of compression, both spatial and temporal, but, broadly speaking, they fall into either IFrame or LongGOP compression types.
IFrame compression is particularly useful for editing, as each frame is described independently (though each frame also carries a heavy storage footprint). LongGOP material, by contrast, can be much more problematic to edit with: the majority of frames must be calculated from a ‘full’ IFrame located some frames away in the stream, so the system may have to decode several frames just to display the one in question.
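The editing cost of LongGOP can be sketched with a little arithmetic. The Python sketch below is a simplification of my own (the GOP size of 15 and the assumption that every frame back to the last IFrame must be decoded are illustrative, not figures from any particular codec), but it shows why scrubbing LongGOP material works the CPU so much harder than IFrame material:

```python
# Minimal sketch of why LongGOP editing is costly: to display an arbitrary
# frame, a decoder must work back to the nearest IFrame and decode forward.
# GOP size and the "all frames needed" simplification are assumptions.

GOP_SIZE = 15  # one IFrame every 15 frames (a common broadcast-style GOP)

def frames_to_decode(target: int) -> int:
    """Frames that must be decoded to display frame `target`
    (simplified: every frame from the last IFrame onward is needed)."""
    last_i = (target // GOP_SIZE) * GOP_SIZE
    return target - last_i + 1

# An IFrame codec always decodes exactly one frame per frame shown:
print(frames_to_decode(0))    # an IFrame itself -> 1
print(frames_to_decode(14))   # worst case, end of GOP -> 15
```

With an IFrame codec the equivalent figure is always one, which is precisely why those codecs remain the editor's friend.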
It’s also fair to say that you need to understand exactly which file you are dealing with, as its acquisition format may well dictate whether the material can be used in the production of true HD images suitable for transmission. 35Mb/s HD material, for example, is not accepted by the likes of Sky and the BBC.
Sadly, in the space where most Post Houses operate, source compression is an evil you need to come to terms with, as it’s firmly here to stay – with the likes of Super Hi-Vision (SH-V) looming on the horizon (some distance off, I’m glad to say). Already being investigated by the BBC and NHK, this format is rated at 4320p and has 16 times the resolution of HD 1080i transmissions (7680 x 4320) at 60 frames per second (fps). SH-V also offers 22.2 channels of audio. The data-rates needed are, by today’s standards, staggering: a single stream requires a storage system with an 8TB uncompressed disk array and a 24Gb/s transfer rate!
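Those headline numbers can be sanity-checked with some quick arithmetic. A short sketch, assuming 8-bit 4:2:0 sampling at 12 bits per pixel (my assumption – other bit depths and subsampling schemes give different figures):

```python
# Rough arithmetic behind the Super Hi-Vision figures quoted above.
# Assumption (mine, not the article's): 8-bit 4:2:0 sampling, i.e. 12 bits
# per pixel. Deeper bit depths or 4:2:2/4:4:4 push the rate higher still.

W, H, FPS = 7680, 4320, 60
BITS_PER_PIXEL = 12  # 8-bit luma plus quarter-resolution chroma (4:2:0)

pixels = W * H
hd_pixels = 1920 * 1080
print(pixels // hd_pixels)        # -> 16 (16x the pixel count of HD)

gbits_per_sec = pixels * BITS_PER_PIXEL * FPS / 1e9
print(round(gbits_per_sec, 1))    # -> 23.9 Gb/s uncompressed

# How long a single stream lasts on an 8TB array:
tb_per_min = gbits_per_sec / 8 * 60 / 1e3
print(round(8 / tb_per_min))      # -> 45 minutes
```

On these assumptions the uncompressed rate lands almost exactly on the quoted 24Gb/s, and an 8TB array holds barely three quarters of an hour of material.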
And the compression used here? ‘Dirac’, a video compression system devised by the BBC, which is based on wavelets and differs from MPEG in that it does not break an image into blocks or compress these blocks for the purpose of transmission. Instead, according to NHK researcher Yoshiaki Shishikui, it delivers images by means of a series of approximations of increasing resolution.
Post Compression and Compression Types
Once we understand that tape is fast becoming a thing of the past, we need to see how this affects the equipment and infrastructure a facility will need to survive in the not-too-distant future. Your facility will need to ensure that its workstations and edit suites are powerful enough to handle the decompression of the acquisition formats. For example, I have found that an IFrame-based codec such as Avid’s DNxHD 120 is light on the CPU for playback, placing only a 12-15% load on a dual-core xw4600 we have here in the office. However, an H.264-compressed image from a Canon 5D MkII camera took over 80% of the CPU’s effort to play back a single stream! In an ever more CPU-hungry environment, some of the older workstations at a post house will simply have to go.
Sadly, for most people, preserving the quality of an image by converting it into uncompressed, IFrame, edit-friendly media is simply not an option – whether because of the sheer volume of media that would have to be ‘transcoded / transwrapped’, the time needed to carry out the operation, or the storage needed to receive the new media that would be created.
As much as the offline / online process has been thought of as dying out, the sheer volume of material now being created for productions using low-cost cameras means that this practice is still very much in place, which in turn means that further compression of the offline material is needed.
Maybe the answer is automated proxy creation, as suggested by Andrew Brearley (which means perhaps Final Cut Server now does have its place in the smaller production community?), importing material and ‘transcoding’ it to an agreed resolution for the offline?
In either case, further time is needed to facilitate this ‘conversion’ operation, which at one time was handled during the ingest process from tape, but now needs to be handled somewhere else.
HD, by its very nature, means that the images being dealt with are on average four times the size of SD images. Therefore, in an uncompressed world, the rule of thumb is that HD images occupy four times the space SD images occupy – unless we’re talking about dual-link HD, where the figure is greater still.
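As a back-of-envelope check (my figures, assuming 10-bit 4:2:2 sampling at 25fps – other bit depths change the totals), the per-hour storage cost can be worked out directly. Note that the multiplier comes out nearer five when comparing full-raster HD against PAL SD; the four-times figure is a loose average across formats:

```python
# Per-hour uncompressed storage for SD vs HD rasters.
# Assumption (mine): 10-bit 4:2:2, i.e. 20 bits per pixel, at PAL 25fps.

BITS_PER_PIXEL = 20
FPS = 25

def gb_per_hour(width: int, height: int) -> float:
    bits_per_sec = width * height * BITS_PER_PIXEL * FPS
    return bits_per_sec * 3600 / 8 / 1e9  # bits -> bytes -> GB

sd = gb_per_hour(720, 576)     # PAL SD raster
hd = gb_per_hour(1920, 1080)   # full-raster HD
print(round(sd), "GB/hr SD")   # -> 93 GB/hr SD
print(round(hd), "GB/hr HD")   # -> 467 GB/hr HD
print(round(hd / sd, 1), "x")  # -> 5.0 x for these particular rasters
```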
This is merely the physical storage requirement, and does not take into account the bandwidth demands on the storage being used – be this local disks or network-shared storage. This is where source compression actually helps a post house: its data-rate is always much lighter than that of the ‘uncompressed’ ideal, which in turn allows material to be delivered over modest 1Gb or 10Gb Ethernet networks rather than bespoke high-speed Fibre or InfiniBand networks.
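To put that networking point in concrete terms, here is a rough sketch of how many concurrent streams each link could carry. The bit-rates are nominal codec figures, and the 70% usable-throughput allowance is my own assumption – real networks lose capacity to protocol overhead:

```python
# How many concurrent video streams fit on common Ethernet links.
# Stream bit-rates are nominal; the 70% usable-line-rate figure is an
# assumption to allow for protocol and filesystem overhead.

LINK_GBPS = {"1GbE": 1.0, "10GbE": 10.0}
STREAM_MBPS = {
    "Uncompressed HD-SDI (1080i 4:2:2 10-bit)": 1485,
    "DNxHD 120 (compressed HD)": 120,
}
USABLE = 0.7  # assume ~70% of line rate is usable for media traffic

for link, gbps in LINK_GBPS.items():
    for stream, mbps in STREAM_MBPS.items():
        n = int(gbps * 1000 * USABLE // mbps)
        print(f"{link}: {n} stream(s) of {stream}")
```

On these numbers a 1Gb link cannot carry even one uncompressed HD stream, but manages five streams of DNxHD 120 – which is exactly why compressed material and modest Ethernet make such comfortable bedfellows.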
This leads on to the need to store rushes in a secure (RAID) location on a network-available device, which can be as simple as a cost-effective NAS device. This will allow for access to material should it need to be revisited for conforming needs, as well as providing a secure backup location.
Some productions may be small enough to have the material attached directly to the NLE workstation or copied to high performance disk sets, if needed. Or, in the case of productions that need shared access to material, it can also be copied to workspaces on high-performance, NLE-ready SANs.
For most post houses running various suites, all forms of storage would need to be addressed to ensure that the most effective workflow can be established, and while the ideal would be an infinite amount of storage on the fastest network available, this simply is not achievable. Tiered storage, used as stepping stones, is therefore currently the most cost-effective solution for the masses.
Summary for Real-World Post
File based workflows in Post bring with them…
Compression: You cannot get away from it, whether you want to or not (for most of us mere mortals, that is). Be careful and mindful of what you do with your rushes if you want to maintain the quality of your final finished pictures.
Storage: This is best broken down into further subsections:
1. Rushes: Most rushes will arrive on ‘commodity’ storage, i.e. Firewire or USB drives. These will probably be moved to more secure RAID storage, which will equally have to be cheap, commodity, network-available storage.
2. Editing: This could be local storage for high image resolutions such as dual-link HD, or network storage for ‘compressed’ material. Be mindful of your total capacity and the performance of your storage, as this is directly linked to your ability to handle some resolutions.
Workstations: Some native source compression codecs place a large overhead on the workstations being used to carry out the edit. Ensure your CPU is up to it, and that you are running the latest version of your NLE software, so that you can access the source footage.
Above all, TEST, TEST, TEST! Ask for some sample footage from the camera being used and actually push it through your simulated workflow, as the devil is always in the detail!
Is Andrew asking too much of the current model for independent post-production facilities? Well, perhaps not. The need to re-tool is vital to moving forward within the ever-evolving production market; failure to do so simply means the death of a company. With prices for software (NLE and storage) and high-performance, generic, IT-based tools dropping all the time, the argument is more that post houses should understand the technologies available to them, and then use them effectively. Innovation is always key to any marketplace, and, in this case, the innovation needs to come from within the post houses, in terms of how they can meet the current needs of their clients.