Versatile Video Coding: In conversation with Nokia’s Ville-Veikko Mattila
Copyright © Legal Benchmarking Limited and its affiliated companies 2024


Sponsored by Nokia

Managing IP sat down with Nokia’s head of multimedia technologies, Ville-Veikko Mattila, to discuss the cutting-edge applications of Versatile Video Coding, and the societal challenges that could be overcome via digital standardisation

MIP: Could you provide a brief explanation of the Versatile Video Coding (VVC) video compression standard?

Ville-Veikko: H.266/VVC is the latest video compression standard, developed collaboratively between ITU-T and MPEG of ISO/IEC and finalised in July 2020. It has two names: H.266 on the ITU-T side and VVC on the MPEG side.

In the development of video coding standards, there’s always a requirement to at least halve the bitrate compared to the previous codec generation – in other words, to cut the bitrate by 50% without reducing picture quality. VVC provides this.

But the codec is also much more flexible for different types of emerging video services, like game streaming, immersive VR, 360-degree video and other new content types. It is also well suited for 5G and, in future, 6G services.

It also has a low latency component. It’s ideal for cloud gaming or any kind of remote operations, such as remotely steering a vehicle. It’s a very important video codec for anything that requires such low latency.

MIP: How have recent developments such as the pandemic changed the demand for, and application of, VVC?

Ville-Veikko: The pandemic has fundamentally changed how we work, and taught all of us to use remote collaboration tools, with video conferencing obviously playing a large role.

Take the video conference we’re using right now, for example: without video compression, it wouldn’t be possible.

But the same applies to social media and video entertainment. Video is crucial to modern social media applications as it allows you to tell more immersive stories, particularly as we move towards higher resolutions like 4K and 8K. This high-quality video requires efficient video compression, and VVC provides that.

MIP: Could you describe some other features of VVC?

Ville-Veikko: It has a wider dynamic range for displaying colours, basically meaning greener greens and redder reds, a big improvement on the previous codec.

There are also tools for screen content, or synthetic content. Gaming is a good example. The new codec is much better equipped to handle this content than previous iterations.

It’s also well-suited to a mix of both captured and synthetic content, for example when the news might display a mixture of live footage and graphics.

VVC also has something called ‘reference picture resampling’, a feature which is particularly well adapted to video streaming and low delay scenarios since it allows seamless frame-based bit-rate adaptation.
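Conceptually, reference picture resampling lets a stream change resolution from one frame to the next without inserting a costly keyframe. A minimal Python sketch of the rate-adaptation decision this enables (the bitrate ladder and function name are illustrative assumptions, not part of the standard):

```python
# Toy illustration: with reference picture resampling, a sender can change
# resolution frame by frame to track available bandwidth, instead of waiting
# for the next keyframe. The ladder values below are hypothetical.

LADDER = [
    (3840, 2160, 15_000_000),  # (width, height, bits per second)
    (1920, 1080, 5_000_000),
    (1280, 720, 2_000_000),
]

def pick_resolution(available_bps: int) -> tuple:
    """Return the highest resolution whose bitrate fits the channel."""
    for width, height, bps in LADDER:
        if bps <= available_bps:
            return (width, height)
    return LADDER[-1][:2]  # fall back to the lowest rung

# Bandwidth drops mid-stream; the next frame simply uses a smaller resolution.
print(pick_resolution(20_000_000))  # (3840, 2160)
print(pick_resolution(3_000_000))   # (1280, 720)
```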

MIP: What role has Nokia played in the development and implementation of VVC?

Ville-Veikko: Nokia has been here for more than 30 years, so we’ve always been contributing to these video standards. Of course, every new standard inherits something from the previous one, so it is a constant evolution of technology.

Overall, we had a broad contribution to the H.266/VVC standard, but as examples, we contributed to efficient intra coding, independent sub-picture coding for immersive VR or 360-degree video streaming, low-latency video coding, and high-level syntax specification which forms the codec’s system and transport interface.

If you think about 4K, 360-degree video capture, the bitrate from the camera is something like 50 megabits per second. Nokia’s viewport-dependent streaming technology can take this down to even three or four megabits per second without compromising the picture quality. It’s a great reduction, and of course VVC plays a role here.
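A back-of-envelope check of those figures. The fractions below are illustrative assumptions, not Nokia’s actual parameters: suppose the viewport covers roughly one eighth of the sphere and is sent at full quality, the out-of-view area is sent at about 5% of full rate, and VVC roughly halves the bitrate versus the previous codec generation:

```python
# Rough sanity check of the quoted 50 Mbit/s -> 3-4 Mbit/s reduction.
# All fractions are toy assumptions for illustration only.
full_360_bps = 50e6          # ~50 Mbit/s for full-quality 4K 360-degree video
viewport_fraction = 0.12     # assume the viewport covers ~1/8 of the sphere
background_scale = 0.05      # assume out-of-view area at ~5% of full rate

streamed = full_360_bps * (viewport_fraction
                           + (1 - viewport_fraction) * background_scale)
with_vvc = streamed / 2      # VVC roughly halves bitrate vs the previous generation

print(f"{with_vvc / 1e6:.1f} Mbit/s")  # 4.1 Mbit/s, in the quoted 3-4 Mbit/s range
```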

I already mentioned the low-latency video possibilities; the tool there is called ‘gradual decoding refresh’. It’s a good example of where we implemented the technology in a very high-quality software package that has potential for all kinds of commercial applications.

MIP: How will VVC transform the way we consume video content?

Ville-Veikko: As the video industry evolves and content becomes increasingly immersive, the media and entertainment industry needs a new codec flexible enough to support a diverse range of experiences. High-definition video is a powerful component of the mobile handset market. A statistic that might surprise people: 80% of all 5G traffic is video. This requires a very efficient codec such as VVC to take the bandwidth down and reduce mobile battery consumption.

MIP: Moving on to multimedia standardisation more broadly, why does standardisation matter?

Ville-Veikko: There are many benefits to standardisation, and one key thing is ensuring interoperability across multiple systems. When you create something it’s great that you know there’s other technology around and it all works together.

Video coding, for example, produces a compressed bitstream. You then need a means to package and transport the content to users, e.g., over video streaming, and that’s when you come to the standardisation of media carriage formats and transport technologies.

In fact, I think open standardisation supports the industry at large by promoting collaborative competition to create the best technical solutions and complete systems. Closed standards, by contrast, mean no competition, vendor lock-in, and possibly a lack of innovation and consumer choice.

MIP: How has Nokia contributed towards multimedia standardisation and what is its position now?

Ville-Veikko: Since the year 2000, we have created almost 5000 inventions that enable multimedia products and services. We have made continuous investments to advance entire industries, not just multimedia: since 2000 we have invested something like €140 billion in research and development. Last year alone we invested €4.5 billion.

We’ve contributed to multimedia standardisation at ISO/IEC (MPEG), ITU-T, IETF and 3GPP for over 30 years, and we are a leading company in standardisation, with a notable impact on, for example, the evolution of video coding and voice coding standards. In 2003 we standardised H.264/Advanced Video Coding, an important stepping stone towards the introduction of H.265/High Efficiency Video Coding in 2013 and H.266/VVC in 2020.

We’re also part of 3GPP which standardises multimedia codecs, systems and services. This includes communication platforms and real-time applications. Here, we’ve also played an integral role in the standardisation of voice communications, particularly regarding teleconferencing. Nokia is now working on something called IVAS, or Immersive Voice and Audio Services, which will be released by the end of this year. It will introduce immersive spatial audio to mobile communication: imagine you are sitting in a beautiful park and the birds are singing all around you. Imagine being able to share that full dynamic sound over an audio call.

MIP: At a consumer level, what are the benefits of multimedia standardisation and through what types of technologies are those benefits seen?

Ville-Veikko: All the applications we use on a daily basis require certain standardisation. Not only one, but multiple standards working together. Of course, video coding is a fundamental part of that.

If you take the raw bitrate of a 4K video running at 30 frames per second, it’s about six gigabits per second in terms of data. At 60 frames per second, it roughly doubles.

If you then think about your home connectivity, maybe your internet runs at around 100 megabits per second.

After compression you get it down to about 15 megabits per second. So the effect of compression, and of the standardisation behind it, is huge.
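The arithmetic above can be verified directly. This sketch assumes 8-bit RGB (24 bits per pixel); real pipelines typically use chroma subsampling, so the exact figure varies:

```python
# Raw bitrate of uncompressed 4K video, assuming 24 bits per pixel (8-bit RGB).
width, height = 3840, 2160   # 4K UHD
bits_per_pixel = 24
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(f"raw: {raw_bps / 1e9:.1f} Gbit/s")  # raw: 6.0 Gbit/s

# Compressed to ~15 Mbit/s, the reduction is roughly 400:1.
compressed_bps = 15e6
print(f"ratio: {raw_bps / compressed_bps:.0f}:1")  # ratio: 398:1
```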

On the consumer level, I could mention ‘random access’ as an example of Nokia’s contribution. Every time you jump to a desired scene within a video stream, continue watching a movie at a later time, switch channels on TV or join a video conference after it’s already started, you use a video decoding functionality called random access. It's hard to imagine life without these features, given all the ways we consume video today!
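A toy model of that behaviour: a decoder can only begin from a random-access point, so a seek snaps to the nearest such point at or before the target frame. The frame numbers below are illustrative:

```python
# Toy model of random access: decoding can only start at a random-access
# point (e.g. an IDR frame), so a seek snaps back to the nearest one at or
# before the target. Point spacing here is hypothetical (one every 2 s at 30 fps).
import bisect

rap_frames = [0, 60, 120, 180, 240]  # sorted list of random-access points

def seek(target_frame: int) -> int:
    """Return the frame where decoding must actually (re)start."""
    i = bisect.bisect_right(rap_frames, target_frame) - 1
    return rap_frames[max(i, 0)]

print(seek(150))  # 120: decode from frame 120, discard 121-149, display 150
```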

MIP: What opportunities does multimedia standardisation offer, and what are the main challenges?

Ville-Veikko: At a higher-level view, what really matters? We know the pressure on our planet. Carbon emissions, the use of natural resources. Productivity is stalling, and Nokia is very conscious that we are bringing digitalisation to physical industries.

Then there’s access to education, healthcare and work in developing countries. These are global topics.

Nokia has a strategy for ESG, and sustainability is part of our core. If you take the environment, it’s about optimising networks for energy consumption, minimising that consumption. With video compression as an example, suddenly you have much less data to transport, which means less energy used per unit of data delivered.

On industrial digitalisation, we recently launched a new product called Real-Time Extended Reality Multimedia. It enables highly efficient, low-latency delivery of 360-degree video for industrial applications such as digital twins, remote steering of vehicles, quality control, and more.

Finally, multimedia plays a key role in bridging the digital divide, for example, by the utilisation of videoconferencing for education and healthcare.

MIP: Does multimedia standardisation have an important role to play in terms of the development of the metaverse?

Ville-Veikko: The metaverse is on everyone’s lips, and there will be industrial, enterprise and consumer metaverses.

If you think of the devices required, like head-mounted displays and augmented reality glasses, it comes back to interoperability. It’s also about new content types, such as capturing yourself as a hologram and bringing it into a virtual space.

As the content becomes more complex, it requires more data, and therefore you need better compression again.

The metaverses are becoming lifelike, and for that you need that low latency operation. Low latency video is so important for the metaverse.

