[MPEGIF Discuss] Re: AVC-derived scalable video coding?
Ben Waggoner
ben interframemedia.com
Fri Sep 19 16:21:33 EDT 2003
on 9/19/03 2:37 PM, Rob Koenen (MPEGIF) at rob.koenen mpegif.org wrote:
> MPEG is working on scalable video coding and initial evidence is very
> promising. A call for proposals is being readied, and may go out in a few
> meetings (there are two more meetings this year, one in October and one in
> December). It may be possible to arrive at completion of such a standard
> sometime in 2005, I guess, but this is going out on a limb.
Sure. And with products following sometime behind that...
As bullish as I am about MPEG-4 in general, it makes me nervous that
MPEG-4 is likely at least three years out from having streaming as good as
competing formats had two years ago.
> I am not 100% sure what the thoughts in the JVT are about adding some sort
> of scalability to AVC. I know the idea has been raised a couple of times.
> That said, the AVC codec does allow seamless switching between streams of
> different rates, which would not qualify as true scalability (being able to
> derive useful video from subsets of the bitstream), but it does effectively
> provide the same functionality in many services and applications.
That would probably be a "good enough" solution, if it is dynamic.
> MPEG-4 part 2 has some scalability and it seems that there is interest in
> Simple Scalable Visual Profile in mobile environments. But for many types of
> scalability, including the Fine Grain Scalability currently in MPEG-4 part
> 2, the market seems to perceive the bitrate penalty on the highest quality
> as still too much.
I've heard this complaint a lot, and I think it is largely unfounded.
Even if FGS has a 25% bitrate penalty, it'd be worth it. Today, if one
wants to make a real-time streaming MPEG-4 file for the public internet, one
has to use the lowest-common-denominator data rate. Thus, if connection
speeds were a bell-shaped distribution around 400 Kbps, one might pick a
data rate of 200 Kbps in order to get 90% of users. Assuming the 25%
performance penalty, that means the 200 Kbps straight stream would look as
good as a 250 Kbps FGS stream. So for the ~75% or so of users who could
sustain 250 Kbps or higher, FGS gives a result at least as good as the
straight stream, and better the faster their connection. And those unlucky
10% who are at less than 200 Kbps, for whom the straight stream fails
outright, will definitely be better off with FGS.
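The arithmetic above can be sketched in a few lines of Python. The 25% penalty, the 200 Kbps straight-stream rate, and the 400 Kbps center come from the numbers in this email; the normal distribution and its spread (sd = 156, chosen so roughly 90% of users sit above 200 Kbps) are my own illustrative assumptions, not anything the codecs specify:

```python
import math

FGS_PENALTY = 0.25     # hypothetical 25% bitrate penalty for FGS coding
STRAIGHT_RATE = 200.0  # fixed rate chosen for the non-scalable stream

def frac_above(rate_kbps, mean=400.0, sd=156.0):
    """Fraction of users sustaining at least rate_kbps, assuming (for
    illustration only) a normal distribution of connection speeds."""
    z = (rate_kbps - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

def effective_quality_kbps(sustained_kbps, scalable):
    """Quality, expressed as the bitrate of an equivalent straight stream."""
    if scalable:
        # FGS adapts to whatever the connection sustains, minus the penalty.
        return sustained_kbps / (1.0 + FGS_PENALTY)
    # The straight stream is fixed; below its rate, playback fails entirely.
    return STRAIGHT_RATE if sustained_kbps >= STRAIGHT_RATE else 0.0

if __name__ == "__main__":
    print(f"users above 200 Kbps: {frac_above(200):.0%}")
    for speed in (150, 200, 250, 400, 1000):
        print(speed, effective_quality_kbps(speed, False),
              round(effective_quality_kbps(speed, True), 1))
```

Note that 250 Kbps is exactly the break-even point: there, the FGS stream matches the 200 Kbps straight stream, and every user above it comes out ahead.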
And this isn't even counting what happens with variable connection
speeds. Those using cable modems, or a shared connection in an office, can
see data rates vary between 100 and 3,000 Kbps during a single session.
Sure, FGS is slightly sub-optimal when you know the exact available data
rate in advance. But since real-world data rates are so unknowable, an FGS
solution would be extremely useful. Scalability can beat raw compression
efficiency in a wide class of situations.
Ben Waggoner <http://www.benwaggoner.com>
Compressed Video Consulting, Training, and Encoding
My Book: <http://www.benwaggoner.com/books.htm>
Cleaner e-book: <http://www.cmpbooks.com/cleaner>