While system on chip (SoC) integrators tackle the challenge of packing more features into their designs with fewer resources and less time, what really drives system complexity is software. The amount of software implemented on SoCs grows exponentially as features and functions are added; in many ways, this makes the hardware IP implementation seem almost easy by comparison. For example, the number of IP blocks implemented per SoC is increasing, and will continue to do so (Figure 1). An unseen cost behind each of those IP blocks is the software code required to support the IP. While this increase in the number of IP blocks in SoCs has led to a lot of discussion about subsystems, you may not be aware of the real motivator behind this trend: software and software integration. Successful IP subsystems are more than collections of IP blocks: a subsystem must include a full software solution, or it will be of limited value.
Each subsystem or IP block inside an SoC requires its own software stack. Each of the individual software stacks then needs to be integrated into the application software running on the host processor, and then everything must be verified. In the case of an audio core or subsystem, the software stack includes audio processing software (from companies like Dolby, SRS, DTS, Microsoft, etc.) that needs to be integrated into an overall media streaming framework.
Figure 1: Growth in the number of IP blocks per SoC
Growth in audio complexity
Examples of these highly complex SoCs can be found in consumer digital TVs, set-top boxes, media players, Blu-ray Disc players, speaker bars, mobile devices, and so on. For each of these applications, audio is a key function. Audio processing itself is also growing in complexity. For example, internet connectivity gives consumer devices access to practically unlimited content, so these devices must support a wide range of audio compression formats and high-quality audio post-processing.
The Blu-ray Disc standard has accelerated the rollout of higher audio quality levels. Consumers expect resolutions with up to 24-bit precision, sampling rates up to 192 kHz, and surround sound up to 7.1 channels. In addition, high-definition audio formats, including DTS-HD Master Audio, Dolby TrueHD and Microsoft WMA 10 Pro, are available in many consumer devices.
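A quick back-of-the-envelope calculation shows why these formats are demanding: the raw (uncompressed) PCM data rate scales linearly with precision, sampling rate, and channel count. The sketch below computes the rates for CD-quality stereo versus the high-definition 7.1 case cited above:

```python
def pcm_bitrate_mbps(bits_per_sample, sample_rate_hz, channels):
    """Raw (uncompressed) PCM data rate in megabits per second."""
    return bits_per_sample * sample_rate_hz * channels / 1e6

# CD-quality stereo: 16-bit, 44.1 kHz, 2 channels
cd = pcm_bitrate_mbps(16, 44_100, 2)    # ≈ 1.41 Mbit/s

# High-definition 7.1 surround: 24-bit, 192 kHz, 8 channels
hd = pcm_bitrate_mbps(24, 192_000, 8)   # ≈ 36.9 Mbit/s
```

The high-definition case carries roughly 26 times the data of CD stereo, which is one reason codecs and post-processing for these formats need dedicated, efficient hardware.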
Mobile phones and tablets now support stereo (2.0) audio output for headphones and up to 7.1 audio channels via HDMI, so that they can be connected to home theater systems and stream high-definition video and audio.
Consumer products require innovative audio post-processing features such as sound enhancement and adaptive volume control. Stereo (2.0) content can be transformed into multi-channel audio (for example, 5.1 or 7.1) to create a convincing surround experience on multi-channel devices. Conversely, devices with only basic 2.0 or 2.1 audio outputs can provide a virtual surround experience through headphones or two small speakers. To bring these experiences to consumers, the SoC must implement post-processing technologies such as SRS WOW HD, DTS Neo:6 and Dolby Virtual Speaker.
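The core idea behind stereo-to-multichannel conversion can be illustrated with a simple passive matrix upmix: the mid (sum) signal feeds the center channel and the side (difference) signal feeds the surrounds. Commercial technologies such as DTS Neo:6 are far more sophisticated; this is only a conceptual sketch, and the mixing coefficients are illustrative, not taken from any product:

```python
def upmix_stereo_to_5_1(left, right):
    """Derive 5.1 channels from a stereo pair using a passive matrix.
    Coefficients are illustrative only, not from any commercial algorithm."""
    center = [0.5 * (l + r) for l, r in zip(left, right)]    # mid signal
    side = [0.5 * (l - r) for l, r in zip(left, right)]      # side signal
    lfe = [0.25 * c for c in center]  # LFE (a real design would low-pass this)
    return {
        "FL": left, "FR": right, "C": center,
        "LFE": lfe, "SL": side, "SR": [-s for s in side],
    }

channels = upmix_stereo_to_5_1([0.2, 0.4], [0.1, 0.0])
```

Real upmixers add frequency-dependent steering, decorrelation, and delay to avoid the artifacts a static matrix like this would produce.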
As you can see, the amount of audio processing integrated in SoCs has grown significantly and requires efficient implementation. To simplify implementation, designers are using dedicated audio subsystems, optimized for audio processing, to offload the host processor. These audio subsystems include either a single- or dual-core audio processor, with dual-core audio processors preferred for high-performance audio subsystems. Dual-core audio processors offer the standard benefit of more processing capacity at a low clock frequency. However, some SoC manufacturers use the dual-core capability differently: they run their own audio software on one core, while reserving the second for their customers to add proprietary post-processing functions for product differentiation.
DesignWare SoundWave Audio Subsystem
Because most of the effort in SoC design is spent on software development and system integration, audio subsystems must allow system integration to begin early in product development. This is the only way to offer a dramatic improvement in time to market. To address this need, the DesignWare® SoundWave Audio Subsystem includes not just all the hardware, but also the complete software stack plus prototyping support.
Since "one-size-fits-all" is not reality in complex audio systems, the SoundWave subsystem includes a configuration tool that enables designers to configure and create the entire subsystem (including all the software). This can be done in a matter of hours, rather than weeks or months, thus significantly reducing integration effort and project risk.
Figure 2: SoundWave Audio Subsystem hardware architecture
The SoundWave Audio Subsystem's hardware includes a powerful and efficient single- or dual-core ARC® Audio Processor that can be fully configured by the user for optimal fit. ARC Audio Processors are tolerant of system memory latencies, which are present in designs where audio processors share DDR memory with, for example, video or graphics data. For off-chip connectivity, the SoundWave Audio Subsystem includes I2S and S/PDIF digital audio interfaces. The S/PDIF interface can also be used as a high-bandwidth, on-chip audio link to Synopsys HDMI IP. Users can integrate analog audio codecs for microphone, line input, line output, headphone and speakers within the subsystem. A smart local interconnect includes a flexible data buffer (FlexFiFo) that eliminates the need for local buffering in the interfaces, thereby reducing area and simplifying the software interface. The SoundWave Audio Subsystem does not require additional DMA IP for streaming audio data to and from the peripherals; audio to and from system sources such as HDMI, USB, or other connectivity IP is processed and rendered via system memory.
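The idea of replacing per-interface local buffers with one shared, flexibly partitioned buffer can be sketched in software terms. The following is a conceptual model only (the class, quotas, and interface names are invented for illustration; this is not the FlexFiFo hardware design):

```python
from collections import deque

class SharedFifo:
    """Conceptual model of one buffer pool partitioned among several
    audio interfaces, instead of a fixed FIFO per interface."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.channels = {}  # one logical queue per attached interface

    def attach(self, name, quota):
        used = sum(q.maxlen for q in self.channels.values())
        if used + quota > self.capacity:
            raise ValueError("shared buffer over-committed")
        self.channels[name] = deque(maxlen=quota)

    def write(self, name, sample):
        self.channels[name].append(sample)

    def read(self, name):
        return self.channels[name].popleft()

fifo = SharedFifo(capacity=64)
fifo.attach("i2s_tx", quota=32)    # hypothetical partition sizes
fifo.attach("spdif_tx", quota=16)
fifo.write("i2s_tx", 0x1234)
```

The benefit the model illustrates is that total buffer capacity is sized once for the subsystem and divided per use case, rather than worst-case-sized in every interface.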
Figure 3: SoundWave Audio Subsystem architecture
The SoundWave Audio Subsystem contains the complete software stack needed to create any audio application. All the drivers for the peripherals, the embedded RTOS, and the software infrastructure are included, as well as a library of basic post-processing functions such as bass management, treble control, equalizers and surround balance. Users can easily include decoders, encoders and advanced post-processing software from Dolby, DTS, SRS and Microsoft, as well as support for popular formats such as FLAC, MP3, AAC-LC and many more. All these functions are made available in a Media Streaming Framework (MSF) that allows users to create audio dataflow graphs for their applications. All the features of the SoundWave Audio Subsystem are made available on the host processor via a GStreamer audio plug-in.
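Conceptually, a media streaming framework lets the application describe audio processing as a directed graph of nodes (for example, decoder → volume control → output). The MSF API itself is not documented here, so the sketch below uses invented node names and trivial pass-through processing purely to show the dataflow-graph idea:

```python
class Node:
    """One processing stage in an audio dataflow graph (illustrative)."""
    def __init__(self, name, fn):
        self.name, self.fn, self.downstream = name, fn, []

    def link(self, other):
        self.downstream.append(other)
        return other  # enables chaining: src.link(a).link(b)

    def push(self, samples):
        out = self.fn(samples)
        for node in self.downstream:
            node.push(out)

sink_output = []
# Hypothetical pipeline: decode -> volume control -> audio interface
decoder = Node("mp3-decode", lambda s: s)  # stand-in: pass-through
volume  = Node("volume", lambda s: [0.5 * x for x in s])
i2s     = Node("i2s-sink", lambda s: sink_output.extend(s) or s)
decoder.link(volume).link(i2s)

decoder.push([1.0, -1.0])
# sink_output is now [0.5, -0.5]
```

A GStreamer pipeline on the host works on the same principle: elements are linked into a graph, and buffers flow from sources through filters to sinks.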
Prototyping with the same audio software that will later run on silicon accelerates integration of the audio subsystem into the SoC. Synopsys' HAPS® FPGA-Based Prototyping Systems facilitate faster system integration and hardware/software validation: the SoundWave Audio Subsystem, plus its software stack, can easily be embedded in the SoC design mapped onto the FPGA. With a virtual prototype of the SoC, application software can be created even before the hardware design is available.
To ease SoC development, lower risk, and accelerate time-to-market, designers will use subsystems like the SoundWave Audio Subsystem. With complete, pre-verified IP subsystems, which include the hardware, software and prototypes, designers can solve their design issues at the chip level rather than the individual block level.