<?xml version='1.0'?> <!DOCTYPE rfc SYSTEM 'rfc2629.dtd'> <?rfc toc="yes" symrefs="yes" ?> <rfc ipr="trust200902" category="std" docName="draft-ietf-codec-opus-02"> <front> <title abbrev="Interactive Audio Codec">Definition of the Opus Audio Codec</title> <author initials="JM" surname="Valin" fullname="Jean-Marc Valin"> <organization>Octasic Inc.</organization> <address> <postal> <street>4101, Molson Street</street> <city>Montreal</city> <region>Quebec</region> <code></code> <country>Canada</country> </postal> <phone>+1 514 282-8858</phone> <email>[email protected]</email> </address> </author> <author initials="K." surname="Vos" fullname="Koen Vos"> <organization>Skype Technologies S.A.</organization> <address> <postal> <street>Stadsgarden 6</street> <city>Stockholm</city> <region></region> <code>11645</code> <country>SE</country> </postal> <phone>+46 855 921 989</phone> <email>[email protected]</email> </address> </author> <date day="14" month="November" year="2010" /> <area>General</area> <workgroup></workgroup> <abstract> <t> This document describes the Opus codec, designed for interactive speech and audio transmission over the Internet. </t> </abstract> </front> <middle> <section anchor="introduction" title="Introduction"> <t> We propose the Opus codec, based on a linear prediction (LP) layer and an MDCT-based enhancement layer. The main idea behind the proposal is that the low frequencies of speech are usually coded more efficiently using linear prediction codecs (such as CELP variants), while the higher frequencies are coded more efficiently in the transform domain (e.g. MDCT). For low sampling rates, the MDCT layer is not useful and only the LP-based layer is used. On the other hand, non-speech signals are not always adequately coded using linear prediction, so for music only the MDCT-based layer is used. </t> <t> In this proposed prototype, the LP layer is based on the <eref target='http://developer.skype.com/silk'>SILK</eref> codec <xref target="SILK"></xref> and the MDCT layer is based on the <eref target='http://www.celt-codec.org/'>CELT</eref> codec <xref target="CELT"></xref>. </t> <t>This is a work in progress.</t> </section> <section anchor="hybrid" title="Opus Codec"> <t> In hybrid mode, each frame is coded first by the LP layer and then by the MDCT layer. In the current prototype, the cutoff frequency is 8 kHz. In the MDCT layer, all bands below 8 kHz are discarded, such that there is no coding redundancy between the two layers. Also, both layers use the same instance of the range coder to encode the signal, which ensures that no "padding bits" are wasted. The hybrid approach makes it easy to support both constant bit-rate (CBR) and variable bit-rate (VBR) coding. Although the SILK layer used is VBR, it is easy to make the bit allocation of the CELT layer produce a final stream that is CBR by using all the bits left unused by the SILK layer. </t> <t>The implementation of the SILK-based LP layer is similar to the description in the <xref target="SILK">SILK Internet-Draft</xref>, with the main exception that SILK was modified to use the same range coder as CELT. The implementation of the CELT-based MDCT layer is available from the CELT website and is a more recent version (0.8.1) of the <xref target="CELT">CELT Internet-Draft</xref>.
The main changes include better support for 20 ms frames as well as the ability to encode only the higher bands using a range coder partially filled by the SILK layer.</t> <t> In addition to their frame size, the SILK and CELT codecs require a look-ahead of 5.2 ms and 2.5 ms, respectively. SILK's look-ahead is due to noise shaping estimation (5 ms) and the internal resampling (0.2 ms), while CELT's look-ahead is due to the overlapping MDCT windows. To compensate for the difference, the CELT encoder input is delayed by 2.7 ms. This ensures that low frequencies and high frequencies arrive at the same time. </t> <section title="Source Code"> <t> The source code is currently available in a <eref target='git://git.xiph.org/users/jm/ietfcodec.git'>Git repository</eref> which references two other repositories (for SILK and CELT). Some snapshots are provided for convenience at <eref target='http://people.xiph.org/~jm/ietfcodec/'/> along with sample files. Although the build system is very primitive, some instructions are provided in the top-level README file. This is very early development, so both the quality and the feature set should greatly improve over time. In the current version, only 48 kHz audio is supported, but support for all configurations listed in <xref target="modes"></xref> is planned. </t> </section> </section> <section anchor="modes" title="Codec Modes"> <t> There are three possible operating modes for the proposed prototype: <list style="numbers"> <t>A linear prediction (LP) mode for use in low bit-rate connections with up to 8 kHz audio bandwidth (16 kHz sampling rate)</t> <t>A hybrid (LP+MDCT) mode for full-bandwidth speech at medium bitrates</t> <t>An MDCT-only mode for very low delay speech transmission as well as music transmission.</t> </list> Each of these modes supports a number of different frame sizes and sampling rates. In order to distinguish between the various modes and configurations, we define a single-byte table-of-contents (TOC) header that can be used in the transport layer (e.g., RTP) to signal this information. The following describes the proposed TOC byte. </t> <t> The LP mode supports the following configurations (numbered from 0 to 11): <list style="symbols"> <t>8 kHz: 10, 20, 40, 60 ms (0..3)</t> <t>12 kHz: 10, 20, 40, 60 ms (4..7)</t> <t>16 kHz: 10, 20, 40, 60 ms (8..11)</t> </list> for a total of 12 configurations. </t> <t> The hybrid mode supports the following configurations (numbered from 12 to 15): <list style="symbols"> <t>32 kHz: 10, 20 ms (12..13)</t> <t>48 kHz: 10, 20 ms (14..15)</t> </list> for a total of 4 configurations. </t> <t> The MDCT-only mode supports the following configurations (numbered from 16 to 31): <list style="symbols"> <t>8 kHz: 2.5, 5, 10, 20 ms (16..19)</t> <t>16 kHz: 2.5, 5, 10, 20 ms (20..23)</t> <t>32 kHz: 2.5, 5, 10, 20 ms (24..27)</t> <t>48 kHz: 2.5, 5, 10, 20 ms (28..31)</t> </list> for a total of 16 configurations. </t> <t> There is thus a total of 32 configurations, encoded in 5 bits. One bit is used to signal mono vs. stereo, which leaves 2 bits for the number of frames per packet (codes 0 to 3): <list style="symbols"> <t>0: 1 frame in the packet</t> <t>1: 2 frames in the packet, each with equal compressed size</t> <t>2: 2 frames in the packet, with different compressed sizes</t> <t>3: an arbitrary number of frames in the packet</t> </list> For code 2, the TOC byte is followed by the length of the first frame, encoded as described below. For code 3, the TOC byte is followed by a byte encoding the number of frames in the packet, with the MSB indicating VBR. In the VBR case, the byte indicating the number of frames is followed by N-1 frame lengths encoded as described below. As an additional limit, the audio duration contained within a packet may not exceed 120 ms. </t> <t> When needed, the compressed size of a frame is indicated with one byte (or two bytes for sizes above 251), with the following meaning: <list style="symbols"> <t>0: No frame (DTX or lost packet)</t> <t>1-251: Size of the frame in bytes</t> <t>252-255: A second byte is needed. The total size is (size[1]*4)+size[0]</t> </list> </t> <t> The maximum size representable is 255*4+255=1275 bytes. For 20 ms frames, that represents a bit-rate of 510 kb/s, which is the highest rate anyone would reasonably want to use in stereo mode (beyond that point, lossless codecs would be more appropriate). </t>
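<t>
As a non-normative illustration, the following C sketch parses the TOC byte and the one- or two-byte frame length described above. The type and function names are invented for this example; they are not part of the reference implementation.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <stddef.h>

typedef struct {
   int config;   /* 0..31: mode, audio bandwidth, frame duration */
   int stereo;   /* 0: mono, 1: stereo                           */
   int code;     /* 0..3: frame count code                       */
} toc_info;

static void parse_toc(unsigned char toc, toc_info *info)
{
   info->config = toc >> 3;        /* top 5 bits    */
   info->stereo = (toc >> 2) & 1;  /* next bit      */
   info->code   = toc & 3;         /* bottom 2 bits */
}

/* Decodes one frame-length field.  Returns the frame size in bytes
 * (0 means DTX or a lost frame), or -1 if the packet is truncated.
 * *consumed is set to the number of length octets read (1 or 2). */
static int parse_frame_length(const unsigned char *p, size_t len,
                              int *consumed)
{
   if (len < 1)
      return -1;
   if (p[0] < 252) {          /* 0..251: the size itself */
      *consumed = 1;
      return p[0];
   }
   if (len < 2)               /* 252..255: second byte needed */
      return -1;
   *consumed = 2;
   return 4*p[1] + p[0];      /* at most 4*255+255 = 1275 bytes */
}
]]></artwork>
</figure>
</t>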
<section anchor="examples" title="Examples"> <t> Simplest case: one narrowband mono 20-ms SILK frame </t> <t> <figure> <artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    1    |0|0|0|             compressed data...                |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork> </figure> </t> <t> Two 48 kHz mono 5 ms CELT frames of the same compressed size: </t> <t> <figure> <artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    29   |0|0|1|             compressed data...                |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork> </figure> </t> <t> Two 48 kHz mono 20-ms hybrid frames of different compressed size: </t> <t> <figure> <artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    15   |0|1|1|       2       |  frame size   |compressed data|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       compressed data...                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork> </figure> </t> <t> Four 48 kHz stereo 20-ms CELT frames of the same compressed size: </t> <t> <figure> <artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    31   |1|1|0|       4       |      compressed data...       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork> </figure> </t> </section> </section> <section title="Codec Encoder"> <t> Opus encoder block diagram. </t> <section anchor="range-encoder" title="Range Coder"> <t> Opus uses an entropy coder based upon <xref target="range-coding"></xref>, which is itself a rediscovery of the FIFO arithmetic code introduced by <xref target="coding-thesis"></xref>. It is very similar to arithmetic encoding, except that encoding is done with digits in any base instead of with bits, so it is faster when using larger bases (i.e., an octet). All of the calculations in the range coder must use bit-exact integer arithmetic. </t> <t> The range coder also acts as the bit-packer for Opus. It is used in three different ways, to encode: <list style="symbols"> <t>entropy-coded symbols with a fixed probability model, using ec_encode() (rangeenc.c)</t> <t>integers from 0 to 2^M-1, using ec_enc_uint() or ec_enc_bits() (entenc.c)</t> <t>integers from 0 to N-1 (where N is not a power of two), using ec_enc_uint() (entenc.c)</t> </list> </t>
<t> The range encoder maintains an internal state vector composed of the four-tuple (low,rng,rem,ext), representing the low end of the current range, the size of the current range, a single buffered output octet, and a count of additional carry-propagating output octets. Both rng and low are 32-bit unsigned integer values, rem is an octet value or the special value -1, and ext is an integer with at least 16 bits. This state vector is initialized at the start of each frame to the value (0,2^31,-1,0). </t> <t> Each symbol is drawn from a finite alphabet and coded in a separate context which describes the size of the alphabet and the relative frequency of each symbol in that alphabet. Opus only uses static contexts; they are not adapted to the statistics of the data that is coded. </t> <section anchor="encoding-symbols" title="Encoding Symbols"> <t> The main encoding function is ec_encode() (rangeenc.c), which takes as an argument a three-tuple (fl,fh,ft) describing the range of the symbol to be encoded in the current context, with 0 <= fl < fh <= ft <= 65535. The values of this tuple are derived from the probability model for the symbol. Let f(i) be the frequency of the ith symbol in the current context. Then the three-tuple corresponding to the kth symbol is given by <![CDATA[ fl=sum(f(i),i<k), fh=fl+f(k), and ft=sum(f(i)). ]]> </t> <t> ec_encode() updates the state of the encoder as follows. If fl is greater than zero, then low = low + rng - (rng/ft)*(ft-fl) and rng = (rng/ft)*(fh-fl). Otherwise, low is unchanged and rng = rng - (rng/ft)*(ft-fh). The divisions here are exact integer division. After this update, the range is normalized. </t> <t> To normalize the range, the following process is repeated until rng > 2^23. First, the top 9 bits of low, (low>>23), are placed into a carry buffer. Then, low is set to <![CDATA[(low << 8 & 0x7FFFFFFF) and rng is set to (rng<<8)]]>. This process is carried out by ec_enc_normalize() (rangeenc.c). </t> <t> The 9 bits produced in each iteration of the normalization loop consist of 8 data bits and a carry flag. The final value of the output bits is not determined until carry propagation is accounted for. Therefore the reference implementation buffers a single (non-propagating) output octet and keeps a count of additional propagating (0xFF) output octets. An implementation MAY choose to use any mathematically equivalent scheme to perform carry propagation. </t> <t> The function ec_enc_carry_out() (rangeenc.c) performs this buffering. It takes a 9-bit input value, c, from the normalization: 8 data bits and a carry bit. If c is 0xFF, then ext is incremented and no octets are output. Otherwise, if rem is not the special value -1, then the octet (rem+(c>>8)) is output. Then ext octets are output with the value 0 if the carry bit is set, or 0xFF if it is not, and rem is set to the lower 8 bits of c. After this, ext is set to zero. </t> <t> In the reference implementation, a special version of ec_encode() called ec_encode_bin() (rangeenc.c) is defined to take a two-tuple (fl,ftb), where <![CDATA[0 <= fl < 2^ftb and ftb < 16. It is mathematically equivalent to calling ec_encode() with the three-tuple (fl,fl+1,1<<ftb)]]>, but avoids using division. </t>
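<t>
The following non-normative C sketch transcribes the state update and carry handling described above. The names and the flat output buffer are illustrative only; they are not the rangeenc.c API.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <stdint.h>

static unsigned char out_buf[1275];  /* flat output buffer */
static int out_pos = 0;

typedef struct {
   uint32_t low;   /* low end of the current range              */
   uint32_t rng;   /* size of the current range                 */
   int      rem;   /* buffered output octet, or -1              */
   uint32_t ext;   /* count of pending carry-propagating octets */
} ec_enc_sketch;   /* initialize to (0, 1u<<31, -1, 0)          */

static void emit_octet(unsigned char o)
{
   out_buf[out_pos++] = o;
}

/* ec_enc_carry_out(): buffer one 9-bit value: 8 data bits + carry */
static void ec_carry_out_sketch(ec_enc_sketch *enc, int c)
{
   if (c == 0xFF) {
      enc->ext++;             /* 0xFF octets may still be changed
                                 by a later carry */
   } else {
      int carry = c >> 8;     /* carry bit propagates into rem */
      if (enc->rem >= 0)
         emit_octet((unsigned char)(enc->rem + carry));
      for (; enc->ext > 0; enc->ext--)
         emit_octet(carry ? 0x00 : 0xFF);  /* flush deferred octets */
      enc->rem = c & 0xFF;    /* buffer the low 8 bits */
   }
}

/* ec_encode(): narrow the range to (fl,fh,ft), then renormalize */
static void ec_encode_sketch(ec_enc_sketch *enc,
                             unsigned fl, unsigned fh, unsigned ft)
{
   uint32_t r = enc->rng/ft;            /* exact integer division */
   if (fl > 0) {
      enc->low += enc->rng - r*(ft-fl);
      enc->rng  = r*(fh-fl);
   } else {
      enc->rng -= r*(ft-fh);
   }
   while (enc->rng <= 1u<<23) {         /* until rng > 2^23 */
      ec_carry_out_sketch(enc, (int)(enc->low >> 23));
      enc->low  = (enc->low << 8) & 0x7FFFFFFF;
      enc->rng <<= 8;
   }
}
]]></artwork>
</figure>
</t>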
</section> <section anchor="encoding-ints" title="Encoding Uniformly Distributed Integers"> <t> The functions ec_enc_uint() and ec_enc_bits() are based on ec_encode() and encode one of N equiprobable symbols, each with a frequency of 1, where N may be as large as 2^32-1. Because ec_encode() is limited to a total frequency of 2^16-1, this is done by encoding a series of symbols in smaller contexts. </t> <t> ec_enc_bits() (entenc.c) is defined, like ec_encode_bin(), to take a two-tuple (fl,ftb), with <![CDATA[0 <= fl < 2^ftb and ftb < 32. While ftb is greater than 8, it encodes bits (ftb-8) to (ftb-1) of fl, i.e., (fl>>ftb-8&0xFF), using ec_encode_bin() and subtracts 8 from ftb. Then, it encodes the remaining bits of fl, i.e., (fl&(1<<ftb)-1)]]>, again using ec_encode_bin(). </t> <t> ec_enc_uint() (entenc.c) takes a two-tuple (fl,ft), where ft is not necessarily a power of two. Let ftb be the location of the highest 1 bit in the two's-complement representation of (ft-1), or -1 if no bits are set. If ftb>8, then the top 8 bits of fl are encoded using ec_encode() with the three-tuple (fl>>ftb-8,(fl>>ftb-8)+1,(ft-1>>ftb-8)+1), and the remaining bits are encoded with ec_enc_bits() using the two-tuple <![CDATA[(fl&(1<<ftb-8)-1,ftb-8). Otherwise, fl is encoded with ec_encode() directly using the three-tuple (fl,fl+1,ft)]]>. </t>
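<t>
Continuing the previous sketch, these two functions reduce to the following non-normative transcription (ec_enc_sketch and ec_encode_sketch() are reused from the figure above; this is not the entenc.c code).
</t>
<t>
<figure>
<artwork><![CDATA[
#include <stdint.h>

/* ec_enc_sketch and ec_encode_sketch() as in the previous sketch */

/* index of the highest set bit of v, or -1 if v is 0 */
static int ilog_sketch(uint32_t v)
{
   int r = -1;
   for (; v; v >>= 1) r++;
   return r;
}

/* ec_enc_bits(): write ftb raw bits of fl, 8 at a time, using the
 * ec_encode_bin() equivalence ec_encode(fl, fl+1, 1<<ftb). */
static void ec_enc_bits_sketch(ec_enc_sketch *enc, uint32_t fl, int ftb)
{
   while (ftb > 8) {
      ftb -= 8;
      uint32_t b = (fl >> ftb) & 0xFF;
      ec_encode_sketch(enc, b, b+1, 1u << 8);
   }
   fl &= (1u << ftb) - 1;
   ec_encode_sketch(enc, fl, fl+1, 1u << ftb);
}

/* ec_enc_uint(): encode fl in [0,ft), where ft need not be a power
 * of two; large alphabets split into a top symbol plus raw bits. */
static void ec_enc_uint_sketch(ec_enc_sketch *enc,
                               uint32_t fl, uint32_t ft)
{
   int ftb = ilog_sketch(ft - 1);
   if (ftb > 8) {
      ftb -= 8;
      ec_encode_sketch(enc, fl >> ftb, (fl >> ftb) + 1,
                       ((ft - 1) >> ftb) + 1);
      ec_enc_bits_sketch(enc, fl & ((1u << ftb) - 1), ftb);
   } else {
      ec_encode_sketch(enc, fl, fl + 1, ft);
   }
}
]]></artwork>
</figure>
</t>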
</section> <section anchor="encoder-finalizing" title="Finalizing the Stream"> <t> After all symbols are encoded, the stream must be finalized by outputting a value inside the current range. Let end be the integer in the interval [low,low+rng) with the largest number of trailing zero bits. Then while end is not zero, the top 9 bits of end, i.e., <![CDATA[(end>>23), are sent to the carry buffer, and end is replaced by (end<<8&0x7FFFFFFF). Finally, if the value in the carry buffer, rem, is]]> neither zero nor the special value -1, or the carry count, ext, is greater than zero, then 9 zero bits are sent to the carry buffer. After the carry buffer is finished outputting octets, the rest of the output buffer is padded with zero octets. Finally, rem is set to the special value -1. This process is implemented by ec_enc_done() (rangeenc.c). </t> </section> <section anchor="encoder-tell" title="Current Bit Usage"> <t> The bit allocation routines in Opus need to be able to determine a conservative upper bound on the number of bits that have been used to encode the current frame thus far. This drives allocation decisions and ensures that the range coder will not overflow the output buffer. This is computed in the reference implementation to fractional bit precision by the function ec_enc_tell() (rangeenc.c). Like all operations in the range encoder, it must be implemented in a bit-exact manner. </t> </section> </section> <section title='SILK Encoder'> <t> In the following, we focus on the core encoder and describe its components. For simplicity, we will refer to the core encoder simply as the encoder in the remainder of this document. An overview of the encoder is given in <xref target="encoder_figure" />. </t> <figure align="center" anchor="encoder_figure"> <artwork align="center"> <![CDATA[ +---+ +----------------------------->| | +---------+ | +---------+ | | |Voice | | |LTP | | | +----->|Activity |-----+ +---->|Scaling |---------+--->| | | |Detector | 3 | | |Control |<+ 12 | | | | +---------+ | | +---------+ | | | | | | | +---------+ | | | | | | | |Gains | | 11 | | | | | | +->|Processor|-|---+---|--->| R | | | | | | | | | | | a | | \/ | | +---------+ | | | | n | | +---------+ | | +---------+ | | | | g | | |Pitch | | | |LSF | | | | | e | | +->|Analysis |-+ | |Quantizer|-|---|---|--->| | | | | |4| | | | | 8 | | | E |-> | | +---------+ | | +---------+ | | | | n |14 | | | | 9/\ 10| | | | | c | | | | | | \/ | | | | o | | | +---------+ | | +----------+| | | | d | | | |Noise | +--|->|Prediction|+---|---|--->| e | | +->|Shaping |-|--+ |Analysis || 7 | | | r | | | |Analysis |5| | | || | | | | | | +---------+ | | +----------+| | | | | | | | | /\ | | | | | | | +---------|--|-------+ | | | | | | | | \/ \/ \/ \/ \/ | | | +---------+ | | +---------+ +------------+ | | | |High-Pass| | | | | |Noise | | | -+->|Filter |-+----+----->|Prefilter|------>|Shaping |->| | 1 | | 2 | | 6 |Quantization|13| | +---------+ +---------+ +------------+ +---+

1: Input speech signal
2: High passed input signal
3: Voice activity estimate
4: Pitch lags (per 5 ms) and voicing decision (per 20 ms)
5: Noise shaping quantization coefficients
   - Short term synthesis and analysis noise shaping
     coefficients (per 5 ms)
   - Long term synthesis and analysis noise shaping
     coefficients (per 5 ms and for voiced speech only)
   - Noise shaping tilt (per 5 ms)
   - Quantizer gain/step size (per 5 ms)
6: Input signal filtered with analysis noise shaping filters
7: Short and long term prediction coefficients
   LTP (per 5 ms) and LPC (per 20 ms)
8: LSF quantization indices
9: LSF coefficients
10: Quantized LSF coefficients
11: Processed gains, and synthesis noise shape coefficients
12: LTP state scaling coefficient. Controlling error propagation /
    prediction gain trade-off
13: Quantized signal
14: Range encoded bitstream
]]> </artwork> <postamble>Encoder block diagram.</postamble> </figure> <section title='Voice Activity Detection'> <t> The input signal is processed by a VAD (Voice Activity Detector) to produce a measure of voice activity, and also spectral tilt and signal-to-noise estimates, for each frame. The VAD uses a sequence of half-band filterbanks to split the signal into four subbands: 0 to Fs/16, Fs/16 to Fs/8, Fs/8 to Fs/4, and Fs/4 to Fs/2, where Fs is the sampling frequency, that is, 8, 12, 16, or 24 kHz. The lowest subband, from 0 to Fs/16, is high-pass filtered with a first-order MA (Moving Average) filter (with transfer function H(z) = 1-z^(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and an SNR (Signal-to-Noise Ratio) value is computed as the logarithm of the ratio of energy to noise level. Using these intermediate variables, the following parameters are calculated for use in other SILK modules (a simplified sketch follows the list): <list style="symbols"> <t> Average SNR. The average of the subband SNR values. </t> <t> Smoothed subband SNRs. Temporally smoothed subband SNR values. </t> <t> Speech activity level. Based on the average SNR and a weighted average of the subband energies. </t> <t> Spectral tilt. A weighted average of the subband SNRs, with positive weights for the low subbands and negative weights for the high subbands. </t> </list> </t>
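<t>
As a rough, non-normative illustration of this bookkeeping, the sketch below tracks a per-subband noise floor and SNR. All constants (smoothing factors, weights) are invented for the example; the reference implementation uses its own fixed-point tuning.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>

#define VAD_BANDS 4

typedef struct {
   float noise_level[VAD_BANDS];   /* tracked background noise       */
   float smoothed_snr[VAD_BANDS];  /* temporally smoothed SNR, in dB */
} vad_sketch;

/* energy[b] is the energy of subband b for the current frame.
 * Returns the average SNR, the basis for the speech activity level. */
static float vad_update_sketch(vad_sketch *st, const float *energy,
                               float *tilt_out)
{
   /* positive weights for low subbands, negative for high ones */
   static const float tilt_w[VAD_BANDS] = { 1.f, .5f, -.5f, -1.f };
   float avg_snr = 0.f, tilt = 0.f;
   for (int b = 0; b < VAD_BANDS; b++) {
      /* track the noise floor: fall fast, rise slowly */
      if (energy[b] < st->noise_level[b])
         st->noise_level[b] = energy[b];
      else
         st->noise_level[b] += 0.01f*(energy[b] - st->noise_level[b]);
      /* SNR as the log of the energy-to-noise-level ratio */
      float snr = 10.f*log10f((energy[b] + 1e-9f)
                              /(st->noise_level[b] + 1e-9f));
      st->smoothed_snr[b] += 0.2f*(snr - st->smoothed_snr[b]);
      avg_snr += snr/VAD_BANDS;
      tilt    += tilt_w[b]*snr;
   }
   *tilt_out = tilt;   /* spectral tilt estimate */
   return avg_snr;
}
]]></artwork>
</figure>
</t>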
</section> <section title='High-Pass Filter'> <t> The input signal is filtered by a high-pass filter to remove the lowest part of the spectrum that contains little speech energy and may contain background noise. This is a second-order ARMA (Auto Regressive Moving Average) filter with a cut-off frequency around 70 Hz. </t> <t> In the future, a music detector may also be used to lower the cut-off frequency when the input signal is detected to be music rather than speech. </t> </section> <section title='Pitch Analysis' anchor='pitch_estimator_overview_section'> <t> The high-passed input signal is processed by the open loop pitch estimator shown in <xref target='pitch_estimator_figure' />. <figure align="center" anchor="pitch_estimator_figure"> <artwork align="center"> <![CDATA[ +--------+ +----------+ |2 x Down| |Time- | +->|sampling|->|Correlator| | | | | | | |4 | +--------+ +----------+ \/ | | 2 +-------+ | | +-->|Speech |5 +---------+ +--------+ | \/ | |Type |-> |LPC | |Down | | +----------+ | | +->|Analysis | +->|sample |-+------------->|Time- | +-------+ | | | | |to 8 kHz| |Correlator|-----------> | +---------+ | +--------+ |__________| 6 | | | |3 | \/ | \/ | +---------+ | +----------+ | |Whitening| | |Time- | -+->|Filter |-+--------------------------->|Correlator|-----------> 1 | | | | 7 +---------+ +----------+

1: Input signal
2: Lag candidates from stage 1
3: Lag candidates from stage 2
4: Correlation threshold
5: Voiced/unvoiced flag
6: Pitch correlation
7: Pitch lags
]]> </artwork> <postamble>Block diagram of the pitch estimator.</postamble> </figure> The pitch analysis finds a binary voiced/unvoiced classification, and, for frames classified as voiced, four pitch lags per frame - one for each 5 ms subframe - and a pitch correlation indicating the periodicity of the signal. The input is first whitened using a Linear Prediction (LP) whitening filter, where the coefficients are computed through standard Linear Prediction Coding (LPC) analysis. The order of the whitening filter is 16 for best results, but is reduced to 12 for medium complexity and 8 for low complexity modes. The whitened signal is analyzed to find pitch lags for which the time correlation is high. The analysis consists of three stages for reducing the complexity (a sketch of the first stage is given after the list): <list style="symbols"> <t>In the first stage, the whitened signal is downsampled to 4 kHz (from 8 kHz) and the current frame is correlated to a signal delayed by a range of lags, starting from a shortest lag corresponding to 500 Hz, to a longest lag corresponding to 56 Hz.</t> <t> The second stage operates on an 8 kHz signal (downsampled from 12, 16, or 24 kHz) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on: <list style="symbols"> <t> Whether the previous frame was classified as voiced </t> <t> The speech activity level </t> <t> The spectral tilt. </t> </list> If the threshold is exceeded, the current frame is classified as voiced and the lag with the highest adjusted correlation is stored for a final pitch analysis of the highest precision in the third stage. </t> <t> The last stage operates directly on the whitened input signal to compute time correlations for each of the four subframes independently in a narrow range around the lag with highest correlation from the second stage. </t> </list> </t>
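<t>
The first stage can be illustrated by the following non-normative float sketch: at 4 kHz, lags from 500 Hz down to 56 Hz correspond to roughly 8 to 71 samples. The real search keeps several candidate lags and runs in fixed point; this version returns only the single best lag.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>

#define MIN_LAG  8   /* 4000/500 Hz */
#define MAX_LAG 71   /* about 4000/56 Hz */

/* x: whitened signal downsampled to 4 kHz; n: frame length.
 * Returns the lag with the highest normalized correlation. */
static int stage1_best_lag(const float *x, int n)
{
   int best_lag = MIN_LAG;
   float best_corr = -1e30f;
   for (int lag = MIN_LAG; lag <= MAX_LAG; lag++) {
      float num = 0.f, den = 1e-9f;
      for (int i = lag; i < n; i++) {
         num += x[i]*x[i-lag];        /* correlation with delayed x */
         den += x[i-lag]*x[i-lag];    /* energy of the delayed part */
      }
      float corr = num/sqrtf(den);    /* normalized correlation */
      if (corr > best_corr) {
         best_corr = corr;
         best_lag = lag;
      }
   }
   return best_lag;
}
]]></artwork>
</figure>
</t>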
</section> <section title='Noise Shaping Analysis' anchor='noise_shaping_analysis_overview_section'> <t> The noise shaping analysis finds gains and filter coefficients used in the prefilter and noise shaping quantizer. These parameters are chosen such that they will fulfil several requirements: <list style="symbols"> <t>Balancing quantization noise and bitrate. The quantization gains determine the step size between reconstruction levels of the excitation signal. Therefore, increasing the quantization gain amplifies quantization noise, but also reduces the bitrate by lowering the entropy of the quantization indices.</t> <t>Spectral shaping of the quantization noise; the noise shaping quantizer is capable of reducing quantization noise in some parts of the spectrum at the cost of increased noise in other parts without substantially changing the bitrate. By shaping the noise such that it follows the signal spectrum, it becomes less audible. In practice, best results are obtained by making the shape of the noise spectrum slightly flatter than the signal spectrum.</t> <t>Deemphasizing spectral valleys; by using different coefficients in the analysis and synthesis part of the prefilter and noise shaping quantizer, the levels of the spectral valleys can be decreased relative to the levels of the spectral peaks such as speech formants and harmonics. This reduces the entropy of the signal, which is the difference between the coded signal and the quantization noise, thus lowering the bitrate.</t> <t>Matching the levels of the decoded speech formants to the levels of the original speech formants; an adjustment gain and a first order tilt coefficient are computed to compensate for the effect of the noise shaping quantization on the level and spectral tilt.</t> </list> </t> <t> <figure align="center" anchor="noise_shape_analysis_spectra_figure"> <artwork align="center"> <![CDATA[ / \ ___ | // \\ | // \\ ____ |_// \\___// \\ ____ | / ___ \ / \\ // \\ P |/ / \ \_/ \\_____// \\ o | / \ ____ \ / \\ w | / \___/ \ \___/ ____ \\___ 1 e |/ \ / \ \ r | \_____/ \ \__ 2 | \ | \___ 3 | +----------------------------------------> Frequency

1: Input signal spectrum
2: Deemphasized and level matched spectrum
3: Quantization noise spectrum
]]> </artwork> <postamble>Noise shaping and spectral de-emphasis illustration.</postamble> </figure> <xref target='noise_shape_analysis_spectra_figure' /> shows an example of an input signal spectrum (1). After de-emphasis and level matching, the spectrum has deeper valleys (2). The quantization noise spectrum (3) more or less follows the input signal spectrum, while having slightly less pronounced peaks. The entropy, which provides a lower bound on the bitrate for encoding the excitation signal, is proportional to the area between the deemphasized spectrum (2) and the quantization noise spectrum (3). Without de-emphasis, the entropy is proportional to the area between input spectrum (1) and quantization noise (3) - clearly higher.
</t> <t> The transformation from input signal to deemphasized signal can be described as a filtering operation with a filter <figure align="center"> <artwork align="center"> <![CDATA[
                                     Wana(z)
H(z) = G * ( 1 - c_tilt * z^(-1) ) * -------,
                                     Wsyn(z)
]]> </artwork> </figure> having an adjustment gain G, a first order tilt adjustment filter with tilt coefficient c_tilt, and where <figure align="center"> <artwork align="center"> <![CDATA[
                16                                 d
                __                                __
Wana(z) = (1 -  \ (a_ana(k)*z^(-k)))*(1 - z^(-L) * \ b_ana(k)*z^(-k)),
                /_                                 /_
               k=1                                k=-d
]]> </artwork> </figure> is the analysis part of the de-emphasis filter, consisting of the short-term shaping filter with coefficients a_ana(k), and the long-term shaping filter with coefficients b_ana(k) and pitch lag L. The parameter d determines the number of long-term shaping filter taps. </t> <t> Similarly, but without the tilt adjustment, the synthesis part can be written as <figure align="center"> <artwork align="center"> <![CDATA[
                16                                 d
                __                                __
Wsyn(z) = (1 -  \ (a_syn(k)*z^(-k)))*(1 - z^(-L) * \ b_syn(k)*z^(-k)).
                /_                                 /_
               k=1                                k=-d
]]> </artwork> </figure> </t> <t> All noise shaping parameters are computed and applied per subframe of 5 milliseconds. First, an LPC analysis is performed on a windowed signal block of 15 milliseconds. The signal block has a look-ahead of 5 milliseconds relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of 16 for best quality or 12 in low complexity operation. The quantization gain is found as the square-root of the residual energy from the LPC analysis, multiplied by a value inversely proportional to the coding quality control parameter and the pitch correlation. </t> <t> Next we find the two sets of short-term noise shaping coefficients a_ana(k) and a_syn(k), by applying different amounts of bandwidth expansion to the coefficients found in the LPC analysis. This bandwidth expansion moves the roots of the LPC polynomial towards the origin, using the formulas <figure align="center"> <artwork align="center"> <![CDATA[
a_ana(k) = a(k)*g_ana^k, and
a_syn(k) = a(k)*g_syn^k,
]]> </artwork> </figure> where a(k) is the k'th LPC coefficient and the bandwidth expansion factors g_ana and g_syn are calculated as <figure align="center"> <artwork align="center"> <![CDATA[
g_ana = 0.94 - 0.02*C, and
g_syn = 0.94 + 0.02*C,
]]> </artwork> </figure> where C is the coding quality control parameter between 0 and 1. Applying more bandwidth expansion to the analysis part than to the synthesis part gives the desired de-emphasis of spectral valleys in between formants. </t> <t> The long-term shaping is applied only during voiced frames. It uses three filter taps, described by <figure align="center"> <artwork align="center"> <![CDATA[
b_ana = F_ana * [0.25, 0.5, 0.25], and
b_syn = F_syn * [0.25, 0.5, 0.25].
]]> </artwork> </figure> For unvoiced frames these coefficients are set to 0. The multiplication factors F_ana and F_syn are chosen between 0 and 1, depending on the coding quality control parameter, as well as the calculated pitch correlation and smoothed subband SNR of the lowest subband. By having F_ana less than F_syn, the pitch harmonics are emphasized relative to the valleys in between the harmonics. </t>
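<t>
A direct, non-normative transcription of these formulas in C (float for readability; the reference code is fixed point):
</t>
<t>
<figure>
<artwork><![CDATA[
/* a[] holds the LPC coefficients a(1)..a(order);
 * C is the coding quality control parameter in [0,1]. */
static void shaping_coefs(const float *a, float C, int order,
                          float *a_ana, float *a_syn)
{
   float g_ana = 0.94f - 0.02f*C;
   float g_syn = 0.94f + 0.02f*C;
   float p_ana = g_ana, p_syn = g_syn;  /* g^k, starting at k = 1 */
   for (int k = 0; k < order; k++) {
      a_ana[k] = a[k]*p_ana;            /* a(k)*g_ana^k */
      a_syn[k] = a[k]*p_syn;            /* a(k)*g_syn^k */
      p_ana *= g_ana;
      p_syn *= g_syn;
   }
}

/* F_ana, F_syn in [0,1], with F_ana < F_syn;
 * both taps are set to zero for unvoiced frames. */
static void lt_shaping_taps(float F_ana, float F_syn,
                            float b_ana[3], float b_syn[3])
{
   static const float base[3] = { 0.25f, 0.5f, 0.25f };
   for (int k = 0; k < 3; k++) {
      b_ana[k] = F_ana*base[k];
      b_syn[k] = F_syn*base[k];
   }
}
]]></artwork>
</figure>
</t>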
<t> The tilt coefficient c_tilt is for unvoiced frames chosen as <figure align="center"> <artwork align="center"> <![CDATA[
c_tilt = 0.4, and as
c_tilt = 0.04 + 0.06 * C
]]> </artwork> </figure> for voiced frames, where C again is the coding quality control parameter and is between 0 and 1. </t> <t> The adjustment gain G serves to correct any level mismatch between the original and decoded signals that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square-root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is to first compute the reflection coefficients from the LPC coefficients through the step-down algorithm, and extract the prediction gain from the reflection coefficients as <figure align="center"> <artwork align="center"> <![CDATA[
              K
             ___
predGain = ( | | 1 - (r_k)^2 )^(-0.5),
             k=1
]]> </artwork> </figure> where r_k is the k'th reflection coefficient. </t>
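<t>
A non-normative sketch of this computation follows (sign conventions for reflection coefficients vary between implementations; this follows the 1 - sum a(k)*z^(-k) filter form used above):
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>

/* a_in[0..order-1] holds a(1)..a(order) of the prediction filter
 * 1 - sum_k a(k)*z^(-k).  Returns the prediction gain, or -1 if an
 * unstable filter (|r_k| >= 1) is detected. */
static double prediction_gain(const double *a_in, int order)
{
   double a[32], b[32];
   double prod = 1.0;
   if (order > 32)
      return -1.0;
   for (int k = 0; k < order; k++) a[k] = a_in[k];
   for (int m = order - 1; m >= 0; m--) {
      double r = a[m];              /* reflection coefficient */
      double denom = 1.0 - r*r;
      if (denom <= 0.0)
         return -1.0;               /* unstable filter */
      prod *= denom;
      for (int i = 0; i < m; i++)   /* step-down to order m */
         b[i] = (a[i] + r*a[m-1-i])/denom;
      for (int i = 0; i < m; i++) a[i] = b[i];
   }
   return 1.0/sqrt(prod);           /* ( prod(1-(r_k)^2) )^(-0.5) */
}
]]></artwork>
</figure>
</t>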
<t> Initial values for the quantization gains are computed as the square-root of the residual energy of the LPC analysis, adjusted by the coding quality control parameter. These quantization gains are later adjusted based on the results of the prediction analysis. </t> </section> <section title='Prefilter'> <t> In the prefilter the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis, see <xref target='noise_shaping_analysis_overview_section' />. By applying only the noise shaping analysis filter to the input signal, it provides the input to the noise shaping quantizer. </t> </section> <section title='Prediction Analysis' anchor='pred_ana_overview_section'> <t> The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech is described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. Inputs to this function include the pre-whitened signal from the pitch estimator, see <xref target='pitch_estimator_overview_section' />. </t> <section title='Voiced Speech' anchor='pred_ana_voiced_overview_section'> <t> For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bit-rate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth-order LTP filter for each of four sub-frames. The LTP coefficients are used to find an LTP residual signal with the simulated output signal as input to obtain better modelling of the output signal. This LTP residual signal is the input to an LPC analysis where the LPCs are estimated using Burg's method, such that the residual energy is minimized. The estimated LPCs are converted to a Line Spectral Frequency (LSF) vector, and quantized as described in <xref target='lsf_quantizer_overview_section' />. After quantization, the quantized LSF vector is converted back to LPC coefficients; by using these quantized coefficients the encoder remains fully synchronized with the decoder. The LTP coefficients are quantized using the method described in <xref target='ltp_quantizer_overview_section' />. The quantized LPC and LTP coefficients are then used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes. </t> </section> <section title='Unvoiced Speech' anchor='pred_ana_unvoiced_overview_section'> <t> For a speech signal that has been classified as unvoiced there is no need for LTP filtering, as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for an LTP analysis to be worth the cost in terms of complexity and rate. Therefore, the pre-whitened input signal is discarded and instead the high-pass filtered input signal is used for LPC analysis using Burg's method. The resulting LPC coefficients are converted to an LSF vector, quantized as described in the following section, and transformed back to obtain quantized LPC coefficients. The quantized LPC coefficients are used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes. </t> </section> </section> <section title='LSF Quantization' anchor='lsf_quantizer_overview_section'> <t>The purpose of quantization in general is to significantly lower the bit rate at the cost of some introduced distortion. A higher rate should always result in lower distortion, and lowering the rate will generally lead to higher distortion. A commonly used but generally sub-optimal approach is to use a quantization method with a constant rate, where only the error is minimized when quantizing.</t> <section title='Rate-Distortion Optimization'> <t>Instead, we minimize an objective function that consists of a weighted sum of rate and distortion, and use a codebook with an associated non-uniform rate table. Thus, we take into account that the probability mass function for selecting the codebook entries is by no means guaranteed to be uniform in our scenario. The advantage of this approach is that it ensures that rarely used codebook vector centroids, which model statistical outliers in the training set, can be quantized with a low error but at a relatively high rate. At the same time, this approach also provides the advantage that frequently used centroids are modelled with low error and a relatively low rate. This approach will lead to equal or lower distortion than the fixed-rate codebook at any given average rate, provided that the data is similar to the data used for training the codebook.</t> </section> <section title='Error Mapping' anchor='lsf_error_mapping_overview_section'> <t> Instead of minimizing the error in the LSF domain, we map the errors to better approximate spectral distortion by applying an individual weight to each element in the error vector. The weight vectors are calculated for each input vector using the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al., see <xref target="laroia-icassp" />. Consequently, we solve the following minimization problem: <figure align="center"> <artwork align="center"> <![CDATA[
LSF_q = argmin { (LSF - c)' * W * (LSF - c) + mu * rate },
        c in C
]]> </artwork> </figure> where LSF_q is the quantized vector, LSF is the input vector to be quantized, and c is the quantized LSF vector candidate taken from the set C of all possible outcomes of the codebook. </t>
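<t>
For one codebook stage, this criterion can be evaluated as in the following non-normative sketch, which treats W as the diagonal matrix of per-element IHMW weights:
</t>
<t>
<figure>
<artwork><![CDATA[
/* cb: M candidate vectors, cb[i*order + j];
 * rate: code length of candidate i, in bits;
 * w: diagonal IHMW weights; mu: rate weight.
 * Returns the index of the best candidate. */
static int rd_search(const float *lsf, const float *w,
                     const float *cb, const float *rate,
                     int M, int order, float mu)
{
   int best = 0;
   float best_cost = 1e30f;
   for (int i = 0; i < M; i++) {
      float cost = mu*rate[i];          /* rate term */
      for (int j = 0; j < order; j++) {
         float e = lsf[j] - cb[i*order + j];
         cost += w[j]*e*e;              /* weighted squared error */
      }
      if (cost < best_cost) {
         best_cost = cost;
         best = i;
      }
   }
   return best;
}
]]></artwork>
</figure>
</t>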
</section> <section title='Multi-Stage Vector Codebook'> <t> We arrange the codebook in a multiple-stage structure to achieve a quantizer that is both memory efficient and highly scalable in terms of computational complexity, see e.g. <xref target="sinervo-norsig" />. In the first stage the input is the LSF vector to be quantized, and in any other stage s > 1, the input is the quantization error from the previous stage, see <xref target='lsf_quantizer_structure_overview_figure' />. <figure align="center" anchor="lsf_quantizer_structure_overview_figure"> <artwork align="center"> <![CDATA[
    Stage 1:           Stage 2:                Stage S:
    +----------+       +----------+            +----------+
    | c_{1,1}  |       | c_{2,1}  |            | c_{S,1}  |
LSF +----------+ res_1 +----------+ res_{S-1}  +----------+
--->| c_{1,2}  |------>| c_{2,2}  |--> ... --->| c_{S,2}  |--->
    +----------+       +----------+            +----------+ res_S =
        ...                ...                     ...       LSF-LSF_q
    +----------+       +----------+            +----------+
    |c_{1,M1-1}|       |c_{2,M2-1}|            |c_{S,MS-1}|
    +----------+       +----------+            +----------+
    | c_{1,M1} |       | c_{2,M2} |            | c_{S,MS} |
    +----------+       +----------+            +----------+
]]> </artwork> <postamble>Multi-Stage LSF Vector Codebook Structure.</postamble> </figure> </t> <t> By storing a total of M codebook vectors, i.e., <figure align="center"> <artwork align="center"> <![CDATA[
     S
    __
M = \  M_s,
    /_
    s=1
]]> </artwork> </figure> where M_s is the number of vectors in stage s, we obtain a total of <figure align="center"> <artwork align="center"> <![CDATA[
     S
    ___
T = | | M_s
    s=1
]]> </artwork> </figure> possible combinations for generating the quantized vector. It is for example possible to represent 2^36 uniquely combined vectors using only 216 vectors in memory, as done in SILK for voiced speech at all sample frequencies above 8 kHz. </t> </section> <section title='Survivor Based Codebook Search'> <t> This number of possible combinations is far too high for a full search to be carried out for each frame, so for all stages but the last, i.e., s smaller than S, only the best min( L, M_s ) centroids are carried over to stage s+1. In each stage the objective function, i.e., the weighted sum of accumulated bit-rate and distortion, is evaluated for each codebook vector entry and the results are sorted. Only the best paths and the corresponding quantization errors are considered in the next stage. In the last stage S the single best path through the multistage codebook is determined. By varying the maximum number of survivors from each stage to the next, L, the complexity can be adjusted in real time, at the cost of a potential increase in the objective function for the resulting quantized vector. This approach scales all the way between the two extremes: L=1 is a greedy search, while L=T/M_S is the desirable but infeasible full search. In fact, performance almost as good as that of the infeasible full search can be obtained at a substantially lower complexity by using this approach, see e.g. <xref target='leblanc-tsap' />. </t> </section> <section title='LSF Stabilization' anchor='lsf_stabilizer_overview_section'> <t>If the input is stable, finding the best candidate will usually result in the quantized vector also being stable. But due to the multi-stage approach it could in theory happen that the best quantization candidate is unstable, so there is a need to explicitly ensure that the quantized vectors are stable.
Therefore, we apply an LSF stabilization method which ensures that the LSF parameters are within their valid range, increasingly sorted, and have minimum distances between each other and the border values that have been pre-determined as the 0.01 percentile distance values from a large training set.</t> </section> <section title='Off-Line Codebook Training'> <t> The vectors and rate tables for the multi-stage codebook have been trained by minimizing the average of the objective function for LSF vectors from a large training set. </t> </section> </section> <section title='LTP Quantization' anchor='ltp_quantizer_overview_section'> <t> For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. The LTP coefficients for each subframe are quantized using entropy-constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20, and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by <figure align="center"> <artwork align="center"> <![CDATA[
RD = u * (b - cb_i)' * W_ltp * (b - cb_i) + r_i,
]]> </artwork> </figure> where u is a fixed, heuristically-determined parameter balancing the distortion and rate. Which codebook gives the best performance for a given LTP vector depends on the weighting matrix for that LTP vector. For example, for a low valued W_ltp, it is advantageous to use the codebook with 10 vectors as it has a lower average rate. For a large W_ltp, on the other hand, it is often better to use the codebook with 40 vectors, as it is more likely to contain the best codebook vector. The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic, the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Neither aspect fluctuates very fast, so the W_ltp matrices for different subframes of one frame are often similar. As a result, one of the three codebooks typically gives good performance for all subframes. Therefore the codebook search for the subframe LTP vectors is constrained to only allow codebook vectors to be chosen from the same codebook, resulting in a rate reduction. </t> <t> To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook, and the vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder. </t> </section> <section title='Noise Shaping Quantizer'> <t> The noise shaping quantizer independently shapes the signal and coding noise spectra to obtain a perceptually higher quality at the same bitrate.
</t> <t> The prefilter output signal is multiplied by a compensation gain G computed in the noise shaping analysis. Then the output of a synthesis shaping filter is added, and the output of a prediction filter is subtracted, to create a residual signal. The residual signal is multiplied by the inverse of the quantized quantization gain from the noise shaping analysis and input to a scalar quantizer. The quantization indices of the scalar quantizer represent a signal of pulses that is input to the pyramid range encoder. The scalar quantizer also outputs a quantization signal, which is multiplied by the quantized quantization gain from the noise shaping analysis to create an excitation signal. The output of the prediction filter is added to the excitation signal to form the quantized output signal y(n). The quantized output signal y(n) is input to the synthesis shaping and prediction filters. A simplified sketch of this loop follows. </t>
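<t>
This sketch is non-normative: the one-tap filters are placeholders standing in for the synthesis shaping and prediction filters, and all names are illustrative.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>

/* placeholder one-tap filters driven by the quantized output y(n) */
static float shape_state, pred_state;
static float shape_filter(float y) { return shape_state = 0.5f*y; }
static float pred_filter(float y)  { return pred_state  = 0.9f*y; }

/* x: prefilter output; y: quantized output; pulses: quantization
 * indices for the range encoder; n: subframe length;
 * G: compensation gain; gain_q: quantized quantization gain. */
static void nsq_sketch(const float *x, float *y, int *pulses, int n,
                       float G, float gain_q)
{
   float shape_out = 0.f, pred_out = 0.f;
   for (int i = 0; i < n; i++) {
      /* gain-compensated input + shaping - prediction */
      float res = G*x[i] + shape_out - pred_out;
      /* scalar quantizer with step size gain_q */
      pulses[i] = (int)lrintf(res/gain_q);
      float exc = pulses[i]*gain_q;   /* excitation signal     */
      y[i] = exc + pred_out;          /* quantized output y(n) */
      /* both filters are updated from y(n) */
      shape_out = shape_filter(y[i]);
      pred_out  = pred_filter(y[i]);
   }
}
]]></artwork>
</figure>
</t>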
</section> <section title='Range Encoder'> <t> Range encoding is a well-known method for entropy coding in which a bitstream sequence is continually updated with every new symbol, based on the probability for that symbol. It is similar to arithmetic coding, but rather than being restricted to generating binary output symbols, it can generate symbols in any chosen number base. In SILK all side information is range encoded. Each quantized parameter has its own cumulative distribution function based on histograms of the quantization indices obtained by running a training database. </t> <section title='Bitstream Encoding Details'> <t> TBD. </t> </section> </section> </section> <section title="CELT Encoder"> <t> Copy from CELT draft. </t> <section anchor="forward-mdct" title="Forward MDCT"> <t>The MDCT implementation has no special characteristics. The input is a windowed signal (after pre-emphasis) of 2*N samples and the output is N frequency-domain samples. A <spanx style="emph">low-overlap</spanx> window is used to reduce the algorithmic delay. It is derived from a basic (full overlap) window that is the same as the one used in the Vorbis codec: W(n)=[sin(pi/2*sin(pi/2*(n+.5)/L))]^2. The low-overlap window is created by zero-padding the basic window and inserting ones in the middle, such that the resulting window still satisfies power complementarity. The MDCT is computed in mdct_forward() (mdct.c), which includes the windowing operation and a scaling of 2/N. </t> </section> <section anchor="normalization" title="Bands and Normalization"> <t> The MDCT output is divided into bands that are designed to match the ear's critical bands, with the exception that each band has to be at least 3 bins wide. For each band, the encoder computes the energy that will later be encoded. Each band is then normalized by the square root of the <spanx style="strong">non-quantized</spanx> energy, such that each band now forms a unit vector X. The energy and the normalization are computed by compute_band_energies() and normalise_bands() (bands.c), respectively. </t> </section> <section anchor="energy-quantization" title="Energy Envelope Quantization"> <t> It is important to quantize the energy with sufficient resolution because any energy quantization error cannot be compensated for at a later stage. Regardless of the resolution used for encoding the shape of a band, it is perceptually important to preserve the energy in each band. CELT uses a coarse-fine strategy for encoding the energy in the base-2 log domain, as implemented in quant_bands.c.</t> <section anchor="coarse-energy" title="Coarse energy quantization"> <t> The coarse quantization of the energy uses a fixed resolution of 6 dB and is the only place where entropy coding is used. To minimize the bitrate, prediction is applied both in time (using the previous frame) and in frequency (using the previous bands). The 2-D z-transform of the prediction filter is: A(z_l, z_b)=(1-a*z_l^-1)*(1-z_b^-1)/(1-b*z_b^-1) where b is the band index and l is the frame index. The prediction coefficients are a=0.8 and b=0.7 when not using intra energy, and a=b=0 when using intra energy. The time-domain prediction is based on the final fine quantization of the previous frame, while the frequency-domain (within the current frame) prediction is based on coarse quantization only (because the fine quantization has not been computed yet). We approximate the ideal probability distribution of the prediction error using a Laplace distribution. The coarse energy quantization is performed by quant_coarse_energy() and unquant_coarse_energy() (quant_bands.c). </t> <t> The Laplace distribution for each band is defined by a 16-bit (Q15) decay parameter. Thus, the value 0 has a frequency count of p[0]=2*(16384*(16384-decay)/(16384+decay)). The values +/- i each have a frequency count p[i] = (p[i-1]*decay)>>14. The value of p[i] is always rounded down (to avoid exceeding 32768 as the sum of all frequency counts), so it is possible for the sum to be less than 32768. In that case additional values with a frequency count of 1 are encoded. The signed values corresponding to symbols 0, 1, 2, 3, 4, ... are [0, +1, -1, +2, -2, ...]. The encoding of the Laplace-distributed values is implemented in ec_laplace_encode() (laplace.c). </t>
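<t>
The frequency counts can be derived as in this non-normative transcription (the handling of leftover probability mass with count-1 values, mentioned above, is omitted):
</t>
<t>
<figure>
<artwork><![CDATA[
#include <stdint.h>

/* Fills p[0..n-1] with the frequency counts of the values
 * 0, +/-1, +/-2, ... for a given decay parameter.  Integer
 * division rounds down at every step, exactly as described. */
static void laplace_counts(uint32_t decay, uint32_t *p, int n)
{
   /* frequency count of the value 0 */
   p[0] = 2*(16384*(16384 - decay)/(16384 + decay));
   for (int i = 1; i < n; i++)
      p[i] = (p[i-1]*decay) >> 14;  /* shared by +i and -i */
}
]]></artwork>
</figure>
</t>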
<!-- FIXME: bit budget consideration --> </section> <!-- coarse energy --> <section anchor="fine-energy" title="Fine energy quantization"> <t> After the coarse energy quantization and encoding, the bit allocation is computed (<xref target="allocation"></xref>) and the number of bits to use for refining the energy quantization is determined for each band. Let B_i be the number of fine energy bits for band i; the refinement is an integer f in the range [0,2^B_i-1]. The mapping between f and the correction applied to the coarse energy is equal to (f+1/2)/2^B_i - 1/2. Fine energy quantization is implemented in quant_fine_energy() (quant_bands.c). </t> <t> If any bits are unused at the end of the encoding process, these bits are used to increase the resolution of the fine energy encoding in some bands. Priority is given to the bands for which the allocation (<xref target="allocation"></xref>) was rounded down. At the same level of priority, lower bands are encoded first. Refinement bits are added until there are no unused bits. This is implemented in quant_energy_finalise() (quant_bands.c). </t> </section> <!-- fine energy --> </section> <!-- Energy quant --> <section anchor="allocation" title="Bit Allocation"> <t>Bit allocation is performed based only on information available to both the encoder and decoder. The same calculations are performed in a bit-exact manner in both the encoder and decoder to ensure that the result is always exactly the same. Any mismatch would cause an error in the decoded output. The allocation is computed by compute_allocation() (rate.c), which is used in both the encoder and the decoder.</t> <t>For a given band, the bit allocation is nearly constant across frames that use the same number of bits for Q1, yielding a pre-defined signal-to-mask ratio (SMR) for each band. Because the bands each have a width of one Bark, this is equivalent to modeling the masking occurring within each critical band, while ignoring inter-band masking and tone-vs-noise characteristics. While this is not an optimal bit allocation, it provides good results without requiring the transmission of any allocation information. </t> <t> For every encoded or decoded frame, a target allocation must be computed using the projected allocation. In the reference implementation this is performed by compute_allocation() (rate.c). The target computation begins by calculating the available space as the number of whole bits which can fit in the frame after Q1 is stored, according to the range coder (ec_[enc/dec]_tell()), multiplied by 8. Then the two projected prototype allocations whose sums multiplied by 8 are nearest to that value are determined. These two projected prototype allocations are then interpolated by finding the highest integer interpolation coefficient in the range 0-16 such that the sum of the higher prototype times the coefficient, plus the sum of the lower prototype multiplied by the difference of 16 and the coefficient, is less than or equal to the available sixteenth-bits. The reference implementation performs this step using a binary search in interp_bits2pulses() (rate.c). The target allocation is the interpolation coefficient times the higher prototype, plus the lower prototype multiplied by the difference of 16 and the coefficient, for each of the CELT bands. </t> <t> Because the computed target will sometimes be somewhat smaller than the available space, the excess space is divided by the number of bands, and this amount is added equally to each band. Any remaining space is added to the target one sixteenth-bit at a time, starting from the first band. The new target now matches the available space, in sixteenth-bits, exactly. </t> <t> The allocation target is separated into a portion used for fine energy and a portion used for the Spherical Vector Quantizer (PVQ). The fine energy quantizer operates in whole-bit steps. For each band, the number of bits per channel used for fine energy is calculated by 50 minus the log2_frac(), with 1/16 bit precision, of the number of MDCT bins in the band. That result is multiplied by the number of bins in the band and again by twice the number of channels, and then the value is set to zero if it is less than zero. Added to that result is 16 times the number of MDCT bins times the number of channels, and it is finally divided by 32 times the number of MDCT bins times the number of channels. If the result times the number of channels is greater than the target divided by 16, the result is set to the target divided by the number of channels divided by 16. Then if the value is greater than 7 it is reset to 7, because a larger amount of fine energy resolution was determined not to make an improvement in perceived quality. The resulting number of fine energy bits per channel is then multiplied by the number of channels and then by 16, and subtracted from the target allocation. This final target allocation is what is used for the PVQ. </t>
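<t>
Because the prose above is dense, a literal, non-normative C transcription for a single band may help. log2_frac16() here is only a floating-point stand-in for the reference log2_frac() with 1/16 bit precision.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>

/* stand-in for log2_frac(): log2(n) in 1/16 bit (Q4) units */
static int log2_frac16(int n)
{
   return (int)floor(16.0*log2((double)n));
}

/* N: MDCT bins in the band; C: channels; target: allocation for
 * the band in 1/16 bit units.  Returns fine energy bits/channel. */
static int fine_energy_bits(int N, int C, int target)
{
   int ebits = (50 - log2_frac16(N))*N*2*C;  /* may be negative */
   if (ebits < 0)
      ebits = 0;
   ebits = (ebits + 16*N*C)/(32*N*C);        /* whole bits      */
   if (ebits*C > target/16)                  /* obey the target */
      ebits = target/C/16;
   if (ebits > 7)                            /* cap at 7 bits   */
      ebits = 7;
   return ebits;  /* caller subtracts ebits*C*16 from the target */
}
]]></artwork>
</figure>
</t>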
</section> <section anchor="pitch-prediction" title="Pitch Prediction"> <t> This section needs to be updated. </t> </section> <section anchor="pvq" title="Spherical Vector Quantization"> <t>CELT uses a Pyramid Vector Quantization (PVQ) <xref target="PVQ"></xref> codebook for quantizing the details of the spectrum in each band that have not been predicted by the pitch predictor. The PVQ codebook consists of all sums of K signed pulses in a vector of N samples, where two pulses at the same position are required to have the same sign. Thus the codebook includes all integer codevectors y of N dimensions that satisfy sum(abs(y(j))) = K. </t> <t> In bands where neither pitch nor folding is used, the PVQ is used to encode the unit vector that results from the normalization in <xref target="normalization"></xref> directly. Given a PVQ codevector y, the unit vector X is obtained as X = y/||y||, where ||.|| denotes the L2 norm. </t> <section anchor="bits-pulses" title="Bits to Pulses"> <t> Although the allocation is performed in 1/16 bit units, the quantization requires an integer number of pulses K. To do this, the encoder searches for the value of K that produces the number of bits that is nearest to the allocated value (rounding down if exactly half-way between two values), subject to not exceeding the total number of bits available. The computation is performed in 1/16 of bits using log2_frac() and ec_enc_tell(). The number of codebook entries can be computed as explained in <xref target="cwrs-encoding"></xref>. The difference between the number of bits allocated and the number of bits used is accumulated to a <spanx style="emph">balance</spanx> (initialised to zero) that helps adjust the allocation for the next bands. One third of the balance is subtracted from the bit allocation of the next band to help achieve the target allocation. The only exceptions are the band before the last and the last band, for which half the balance and the whole balance are subtracted, respectively. </t> </section> <section anchor="pvq-search" title="PVQ Search"> <t> The search for the best codevector y is performed by alg_quant() (vq.c). There are several possible approaches to the search, with a tradeoff between quality and complexity. The method used in the reference implementation computes an initial codeword y0 by projecting the residual signal R = X - p' onto the codebook pyramid of K-1 pulses: </t> <t> y0 = round_towards_zero( (K-1) * R / sum(abs(R))) </t> <t> Depending on N, K and the input data, the initial codeword y0 may contain from 0 to K-1 non-zero values. All the remaining pulses, with the exception of the last one, are found iteratively with a greedy search that minimizes the normalized correlation between y and R: </t> <t> J = -R^T*y / ||y|| </t> <t> The search described above is considered to be a good trade-off between quality and computational cost. However, there are other possible ways to search the PVQ codebook and the implementors MAY use any other search methods. </t>
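<t>
A simplified, non-normative float version of such a search is shown below. Unlike alg_quant(), it adds every remaining pulse greedily (including the last) and is not optimized.
</t>
<t>
<figure>
<artwork><![CDATA[
#include <math.h>
#include <stdlib.h>

/* R: residual signal; y: output codeword; N: dimensions; K: pulses */
static void pvq_search_sketch(const float *R, int *y, int N, int K)
{
   float sum = 0.f, corr = 0.f, energy = 1e-15f;
   int pulses = 0;
   for (int j = 0; j < N; j++) sum += fabsf(R[j]);
   if (sum < 1e-15f) sum = 1e-15f;
   /* project onto the pyramid of K-1 pulses, truncating toward 0 */
   for (int j = 0; j < N; j++) {
      y[j] = (int)((K-1)*R[j]/sum);
      pulses += abs(y[j]);
      corr   += R[j]*y[j];
      energy += (float)y[j]*y[j];
   }
   /* add remaining pulses one at a time, maximizing R^T*y/||y||
    * (equivalently, minimizing J = -R^T*y/||y||) */
   while (pulses < K) {
      int best = 0;
      float best_score = -1e30f;
      for (int j = 0; j < N; j++) {
         float s = (R[j] >= 0.f) ? 1.f : -1.f; /* sign follows R */
         float c = corr + s*R[j];
         float e = energy + 2.f*s*y[j] + 1.f;
         float score = c/sqrtf(e);
         if (score > best_score) { best_score = score; best = j; }
      }
      float s = (R[best] >= 0.f) ? 1.f : -1.f;
      corr   += s*R[best];
      energy += 2.f*s*y[best] + 1.f;  /* uses y[best] before update */
      y[best] += (int)s;
      pulses++;
   }
}
]]></artwork>
</figure>
</t>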
The number of combinations can be computed recursively as V(N,K) = V(N-1,K) + V(N,K-1) + V(N-1,K-1), with V(N,0) = 1 and V(0,K) = 0 for K != 0. There are many different ways to compute V(N,K), including pre-computed tables and direct use of the recursive formulation. The reference implementation applies the recursive formulation one line (or column) at a time to save on memory use, along with an alternate, univariate recurrence to initialise an arbitrary line, and direct polynomial solutions for small N. All of these methods are equivalent, and have different trade-offs in speed, memory usage, and code size. Implementations MAY use any methods they like, as long as they are equivalent to the mathematical definition. </t> <t> The indexing computations are performed using 32-bit unsigned integers. For large codebooks, 32-bit integers are not sufficient. Instead of using 64-bit integers (or more), the encoding is made slightly sub-optimal by splitting each band into two equal (or near-equal) vectors of size (N+1)/2 and N/2, respectively. The number of pulses in the first half, K1, is first encoded as an integer in the range [0,K]. Then the two halves are encoded using codebooks of V((N+1)/2, K1) and V(N/2, K-K1) entries, respectively. The split operation is performed recursively, in case one (or both) of the split vectors still requires more than 32 bits. For compatibility reasons, the handling of codebooks of more than 32 bits MUST be implemented with the splitting method, even if 64-bit arithmetic is available. </t> </section> </section> <section anchor="stereo" title="Stereo support"> <t> When encoding a stereo stream, some parameters are shared across the left and right channels, while others are transmitted separately for each channel, or jointly encoded. Only one copy of the flags for the features, transients and pitch (pitch period and gains) is transmitted. The coarse and fine energy parameters are transmitted separately for each channel. Both the coarse energy and fine energy (including the remaining fine bits at the end of the stream) have the left and right bands interleaved in the stream, with the left band encoded first. </t> <t> The main difference between mono and stereo coding is the PVQ coding of the normalized vectors. In stereo mode, a normalized mid-side (M-S) encoding is used. Let L and R be the normalized vectors of a certain band for the left and right channels, respectively. The mid and side vectors are computed as M=L+R and S=L-R and no longer have unit norm. </t> <t> From M and S, an angular parameter theta=2/pi*atan2(||S||, ||M||) is computed. The theta parameter is converted to a Q14 fixed-point parameter itheta, which is quantized on a scale from 0 to 1 with an interval of 2^-qb, where qb = (b-2*(N-1)*(40-log2_frac(N,4)))/(32*(N-1)), b is the number of bits allocated to the band, and log2_frac() is defined in cwrs.c. From here on, the value of itheta MUST be treated in a bit-exact manner, since both the encoder and decoder rely on it to infer the bit allocation. </t> <t> Let m=M/||M|| and s=S/||S||; m and s are separately encoded with the PVQ encoder described in <xref target="pvq"></xref>. The number of bits allocated to m and s depends on the value of itheta.
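As a non-normative illustration, the angle computation and quantization described above can be sketched in floating point as follows (the reference implementation is fixed-point and MUST be matched bit-exactly; the clamping of qb below is purely illustrative):
<figure align="center">
<artwork align="center">
<![CDATA[
#include <math.h>

/* norm_m and norm_s are ||M|| and ||S||, b is the number of
   sixteenth-bits allocated to the band, N > 1 is the band size,
   and log2_frac(x,4) is assumed to return log2(x) in 1/16 bit
   units.  Returns the quantized angle itheta in Q14. */
static int stereo_itheta(double norm_m, double norm_s, int b, int N)
{
   double theta, q;
   int qb = (b - 2*(N-1)*(40 - log2_frac(N, 4)))/(32*(N-1));
   if (qb < 0)  qb = 0;           /* illustrative clamping only */
   if (qb > 14) qb = 14;
   theta = 2/M_PI*atan2(norm_s, norm_m);          /* in [0,1]   */
   q = floor(0.5 + theta*(1<<qb))/(1<<qb);        /* step 2^-qb */
   return (int)floor(0.5 + 16384*q);              /* Q14        */
}
]]>
</artwork>
</figure>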
The number of bits allocated to coding m is obtained by: </t> <t> <list> <t>imid = bitexact_cos(itheta);</t> <t>iside = bitexact_cos(16384-itheta);</t> <t>delta = (N-1)*(log2_frac(iside,6)-log2_frac(imid,6))>>2;</t> <t>qalloc = log2_frac((1<<qb)+1,4);</t> <t>mbits = (b-qalloc/2-delta)/2;</t> </list> </t> <t>where bitexact_cos() is a fixed-point cosine approximation that MUST be bit-exact with the reference implementation in mathops.h. The spectral folding operation is performed independently for the mid and side vectors.</t> </section> <section anchor="synthesis" title="Synthesis"> <t> After all the quantization is completed, the quantized energy is used along with the quantized normalized band data to resynthesize the MDCT spectrum. The inverse MDCT (<xref target="inverse-mdct"></xref>) and the weighted overlap-add are applied and the signal is stored in the <spanx style="emph">synthesis buffer</spanx> so it can be used for pitch prediction. The encoder MAY omit this step of the processing if it knows that it will not be using the pitch predictor for the next few frames. If the de-emphasis filter (<xref target="inverse-mdct"></xref>) is applied to this resynthesized signal, then the output will be the same (within numerical precision) as the decoder's output. </t> </section> <section anchor="vbr" title="Variable Bitrate (VBR)"> <t> Each CELT frame can be encoded in a different number of octets, making it possible to vary the bitrate at will. This property can be used to implement source-controlled variable bitrate (VBR). Support for VBR is OPTIONAL for the encoder, but a decoder MUST be prepared to decode a stream that changes its bit-rate dynamically. The method used to vary the bit-rate in VBR mode is left to the implementor, as long as each frame can be decoded by the reference decoder. </t> </section> </section> </section> <section title="Codec Decoder"> <t> Opus decoder block diagram. </t> <section anchor="range-decoder" title="Range Decoder"> <t> The range decoder extracts the symbols and integers encoded using the range encoder in <xref target="range-encoder"></xref>. The range decoder maintains an internal state vector composed of the two-tuple (dif,rng), representing the difference between the high end of the current range and the actual coded value, and the size of the current range, respectively. Both dif and rng are 32-bit unsigned integer values. rng is initialized to 2^7. dif is initialized to rng minus the top 7 bits of the first input octet. Then the range is immediately normalized, using the procedure described in the following section. </t> <section anchor="decoding-symbols" title="Decoding Symbols"> <t> Decoding symbols is a two-step process. The first step determines a value fs that lies within the range of some symbol in the current context. The second step updates the range decoder state with the three-tuple (fl,fh,ft) corresponding to that symbol, as defined in <xref target="encoding-symbols"></xref>. </t> <t> The first step is implemented by ec_decode() (rangedec.c), and computes fs = ft-min((dif-1)/(rng/ft)+1,ft), where ft is the sum of the frequency counts in the current context, as described in <xref target="encoding-symbols"></xref>. The divisions here are exact integer divisions. </t> <t> In the reference implementation, a special version of ec_decode() called ec_decode_bin() (rangedec.c) is defined using the parameter ftb instead of ft. It is mathematically equivalent to calling ec_decode() with ft = (1<<ftb), but avoids one of the divisions.
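As a non-normative sketch, the computation and its power-of-two specialization can be written as:
<figure align="center">
<artwork align="center">
<![CDATA[
/* fs computation for ec_decode(), with dif, rng and ft as defined
   above.  All divisions are integer divisions. */
static unsigned dec_fs(unsigned dif, unsigned rng, unsigned ft)
{
   unsigned s = (dif - 1)/(rng/ft) + 1;
   return ft - (s < ft ? s : ft);
}

/* Same computation with ft == 1<<ftb: the division rng/ft becomes
   a shift, which is the division that ec_decode_bin() avoids. */
static unsigned dec_fs_bin(unsigned dif, unsigned rng, unsigned ftb)
{
   unsigned ft = 1u << ftb;
   unsigned s = (dif - 1)/(rng >> ftb) + 1;
   return ft - (s < ft ? s : ft);
}
]]>
</artwork>
</figure>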
</t> <t> The decoder then identifies the symbol in the current context corresponding to fs; i.e., the one whose three-tuple (fl,fh,ft) satisfies fl <= fs < fh. This tuple is used to update the decoder state according to dif = dif - (rng/ft)*(ft-fh), and if fl is greater than zero, rng = (rng/ft)*(fh-fl), or otherwise rng = rng - (rng/ft)*(ft-fh). After this update, the range is normalized. </t> <t> To normalize the range, the following process is repeated until rng > 2^23. First, rng is set to (rng<<8)&0xFFFFFFFF. Then the next 8 bits of input are read into sym, using the remaining bit from the previous input octet as the high bit of sym, and the top 7 bits of the next octet for the remaining bits of sym. If no more input octets remain, zero bits are used instead. Then, dif is set to ((dif<<8)-sym)&0xFFFFFFFF (i.e., using wrap-around if the subtraction overflows a 32-bit register). Finally, if dif is larger than 2^31, dif is then set to dif - 2^31. This process is carried out by ec_dec_normalize() (rangedec.c). </t> </section> <section anchor="decoding-ints" title="Decoding Uniformly Distributed Integers"> <t> The functions ec_dec_uint() and ec_dec_bits() are based on ec_decode() and decode one of N equiprobable symbols, each with a frequency of 1, where N may be as large as 2^32-1. Because ec_decode() is limited to a total frequency of 2^16-1, this is done by decoding a series of symbols in smaller contexts. </t> <t> ec_dec_bits() (entdec.c) is defined, like ec_decode_bin(), to take a single parameter ftb, with ftb < 32, and produces an ftb-bit decoded integer value, t, which is initialized to zero. While ftb is greater than 8, it decodes the next 8 most significant bits of the integer, s = ec_decode_bin(8), updates the decoder state with the three-tuple (s,s+1,256), adds those bits to the current value of t, t = t<<8 | s, and subtracts 8 from ftb. Then it decodes the remaining bits of the integer, s = ec_decode_bin(ftb), updates the decoder state with the three-tuple (s,s+1,1<<ftb), and adds those bits to the final value of t, t = t<<ftb | s. </t> <t> ec_dec_uint() (entdec.c) takes a single parameter, ft, which is not necessarily a power of two, and returns an integer, t, with a value between 0 and ft-1, inclusive, which is initialized to zero. Let ftb be the location of the highest 1 bit in the two's-complement representation of (ft-1), or -1 if no bits are set. If ftb>8, then the top 8 bits of t are decoded using t = ec_decode((ft-1>>ftb-8)+1), the decoder state is updated with the three-tuple (t,t+1,(ft-1>>ftb-8)+1), and the remaining bits are decoded with t = t<<ftb-8|ec_dec_bits(ftb-8). If, at this point, t >= ft, then the current frame is corrupt, and decoding should stop. If the original value of ftb was not greater than 8, then t is decoded with t = ec_decode(ft), and the decoder state is updated with the three-tuple (t,t+1,ft). </t> </section> <section anchor="decoder-tell" title="Current Bit Usage"> <t> The bit allocation routines in CELT need to be able to determine a conservative upper bound on the number of bits that have been consumed from the current frame thus far. This drives allocation decisions that must match those made in the encoder. In the reference implementation this is computed to fractional-bit precision by the function ec_dec_tell() (rangedec.c). Like all operations in the range decoder, it must be implemented in a bit-exact manner, and must produce exactly the same value returned by ec_enc_tell() after encoding the same symbols.
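As a rough, non-normative illustration of the idea only (the reference ec_dec_tell() is fractional and bit-exact, and this hypothetical sketch is neither), a whole-bit bound can be derived from the number of octets consumed so far and the current range, since about log2(rng) bits of headroom remain undetermined in the decoder state:
<figure align="center">
<artwork align="center">
<![CDATA[
/* Hypothetical sketch: ilog() is assumed to return the number of
   bits needed to represent its argument (ilog(1) == 1).  The bits
   still unresolved in rng are credited back against the raw input
   consumed. */
static int dec_tell_whole_bits(int octets_consumed, unsigned rng)
{
   return 8*octets_consumed - (ilog(rng) - 1);
}
]]>
</artwork>
</figure>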
</t> </section> </section> <section anchor='outline_decoder' title='SILK Decoder'> <t> At the receiving end, each received packet is split by the range decoder into a number of frames, each of which contains the information necessary to reconstruct a 20 ms frame of the output signal. </t> <section title="Decoder Modules"> <t> An overview of the decoder is given in <xref target="decoder_figure" />. <figure align="center" anchor="decoder_figure"> <artwork align="center"> <![CDATA[
   +---------+    +------------+
-->| Range   |--->| Decode     |---------------------------+
 1 | Decoder | 2  | Parameters |----------+              5 |
   +---------+    +------------+        4 |                |
                       3 |                |                |
                        \/               \/               \/
                  +------------+   +------------+   +------------+
                  | Generate   |-->| LTP        |-->| LPC        |-->
                  | Excitation |   | Synthesis  |   | Synthesis  | 6
                  +------------+   +------------+   +------------+

1: Range encoded bitstream
2: Coded parameters
3: Pulses and gains
4: Pitch lags and LTP coefficients
5: LPC coefficients
6: Decoded signal
]]> </artwork> <postamble>Decoder block diagram.</postamble> </figure> </t> <section title='Range Decoder'> <t> The range decoder decodes the encoded parameters from the received bitstream. Output from this function includes the pulses and gains for generating the excitation signal, as well as the LTP and LSF codebook indices needed to decode the LTP and LPC coefficients used by the LTP and LPC synthesis filters, respectively. </t> </section> <section title='Decode Parameters'> <t> Pulses and gains are decoded from the parameters that were decoded by the range decoder. </t> <t> When a voiced frame is decoded and the LTP codebook selection and indices are received, the LTP coefficients are decoded by choosing, in the selected codebook, the vector that corresponds to the given codebook index. This is done for each of the four subframes. The LPC coefficients are decoded from the LSF codebook by adding the chosen vectors, one from each stage of the codebook. The resulting LSF vector is stabilized using the same method that was used in the encoder, see <xref target='lsf_stabilizer_overview_section' />. The LSF coefficients are then converted to LPC coefficients, and passed on to the LPC synthesis filter. </t> </section> <section title='Generate Excitation'> <t> The pulse signal is multiplied by the quantization gain to create the excitation signal. </t> </section> <section title='LTP Synthesis'> <t> For voiced speech, the excitation signal e(n) is input to an LTP synthesis filter that recreates the long-term correlation removed by the LTP analysis filter and generates an LPC excitation signal e_LPC(n), according to <figure align="center"> <artwork align="center"> <![CDATA[
                     d
                    __
 e_LPC(n) = e(n) + \   e(n - L - i) * b_i,
                   /_
                  i=-d
]]> </artwork> </figure> using the pitch lag L and the decoded LTP coefficients b_i. For unvoiced speech, the output signal is simply a copy of the excitation signal, i.e., e_LPC(n) = e(n). </t> </section> <section title='LPC Synthesis'> <t> In a similar manner, the short-term correlation that was removed in the LPC analysis filter is recreated in the LPC synthesis filter. The LPC excitation signal e_LPC(n) is filtered using the LPC coefficients a_i, according to <figure align="center"> <artwork align="center"> <![CDATA[
                   d_LPC
                    __
 y(n) = e_LPC(n) + \    y(n - i) * a_i,
                   /_
                   i=1
]]> </artwork> </figure> where d_LPC is the LPC synthesis filter order, and y(n) is the decoded output signal.
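A non-normative floating-point sketch of the two synthesis filters follows. Subframe handling, buffer management and the fixed-point details of the reference implementation are omitted, and all names are illustrative:
<figure align="center">
<artwork align="center">
<![CDATA[
/* e[] is assumed to provide at least L+d samples of history before
   index 0, and y[] at least d_lpc samples.  b[i+d] stores b_i for
   i = -d..d, and a[i-1] stores a_i for i = 1..d_lpc. */
static void ltp_synthesis(float *e_lpc, const float *e, int len,
                          int L, const float *b, int d)
{
   int n, i;
   for (n = 0; n < len; n++) {
      float acc = e[n];
      for (i = -d; i <= d; i++)
         acc += e[n - L - i]*b[i + d];
      e_lpc[n] = acc;
   }
}

static void lpc_synthesis(float *y, const float *e_lpc, int len,
                          const float *a, int d_lpc)
{
   int n, i;
   for (n = 0; n < len; n++) {
      float acc = e_lpc[n];
      for (i = 1; i <= d_lpc; i++)
         acc += y[n - i]*a[i - 1];
      y[n] = acc;
   }
}
]]>
</artwork>
</figure>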
</t> </section> </section> </section> <section title="CELT Decoder"> <t> Insert decoder figure. </t> <t> The decoder extracts information from the range-coded bit-stream in the same order as it was encoded by the encoder. In some circumstances, it is possible for a decoded value to be out of range due to a very small amount of redundancy in the encoding of large integers by the range coder. In that case, the decoder should assume there has been an error in the coding, decoding, or transmission and SHOULD take measures to conceal the error and/or report to the application that a problem has occurred. </t> <section anchor="energy-decoding" title="Energy Envelope Decoding"> <t> The energy of each band is extracted from the bit-stream in two steps, according to the same coarse-fine strategy used in the encoder. First, the coarse energy is decoded in unquant_coarse_energy() (quant_bands.c), based on the probability of the Laplace model used by the encoder. </t> <t> After the coarse energy is decoded, the same allocation function as used in the encoder is called. This determines the number of bits to decode for the fine energy quantization. The decoding of the fine energy bits is performed by unquant_fine_energy() (quant_bands.c). Finally, like the encoder, the remaining bits in the stream (that would otherwise go unused) are decoded using unquant_energy_finalise() (quant_bands.c). </t> </section> <section anchor="pitch-decoding" title="Pitch prediction decoding"> <t> If the pitch bit is set, then the pitch period is extracted from the bit-stream. The pitch gain bits are extracted within the PVQ decoding, as encoded by the encoder. When the folding bit is set, the folding prediction is computed in exactly the same way as in the encoder, with the same gain, by the function intra_fold() (vq.c). </t> </section> <section anchor="PVQ-decoder" title="Spherical VQ Decoder"> <t> In order to correctly decode the PVQ codewords, the decoder must perform exactly the same bits-to-pulses conversion as the encoder. </t> <section anchor="cwrs-decoder" title="Index Decoding"> <t> The decoding of the codeword from the index is performed as specified in <xref target="PVQ"></xref>, as implemented in function decode_pulses() (cwrs.c). </t> </section> <section anchor="normalised-decoding" title="Normalised Vector Decoding"> <t> The spherical codebook is decoded by alg_unquant() (vq.c). The index of the PVQ entry is obtained from the range coder and converted to a pulse vector by decode_pulses() (cwrs.c). </t> <t>The decoded normalized vector for each band is equal to</t> <t>X' = y/||y||.</t> <t> This operation is implemented in mix_pitch_and_residual() (vq.c), which is the same function as used in the encoder. </t> </section> </section> <section anchor="denormalization" title="Denormalization"> <t> Just as each band was normalized in the encoder, the last step of the decoder before the inverse MDCT is to denormalize the bands. Each decoded normalized band is multiplied by the square root of the decoded energy. This is done by denormalise_bands() (bands.c). </t> </section> <section anchor="inverse-mdct" title="Inverse MDCT"> <t>The inverse MDCT implementation has no special characteristics. The input is N frequency-domain samples, and the output is 2*N time-domain samples scaled by 1/2. The output is windowed using the same window as the encoder. The IMDCT and windowing are performed by mdct_backward (mdct.c).
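For illustration, a direct O(N^2) evaluation consistent with these conventions might look as follows (non-normative; the reference mdct_backward() is FFT-based and also applies the window, and the 1/2 scaling convention here is taken from the text above):
<figure align="center">
<artwork align="center">
<![CDATA[
#include <math.h>

/* Naive IMDCT sketch: N frequency-domain samples in, 2*N
   time-domain samples out, scaled by 1/2. */
static void imdct_naive(const float *X, float *x, int N)
{
   int n, k;
   for (n = 0; n < 2*N; n++) {
      double acc = 0;
      for (k = 0; k < N; k++)
         acc += X[k]*cos(M_PI/N*(n + 0.5 + N/2.0)*(k + 0.5));
      x[n] = (float)(acc/2);
   }
}
]]>
</artwork>
</figure>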
If a time-domain pre-emphasis window was applied in the encoder, the (inverse) time-domain de-emphasis window is applied on the IMDCT result. After the overlap-add process, the signal is de-emphasized using the inverse of the pre-emphasis filter used in the encoder: 1/A(z)=1/(1-alpha_p*z^-1). </t> </section> <section anchor="Packet Loss Concealment" title="Packet Loss Concealment (PLC)"> <t> Packet loss concealment (PLC) is an optional decoder-side feature which SHOULD be included when transmitting over an unreliable channel. Because PLC is not part of the bit-stream, there are several possible ways to implement PLC with different complexity/quality trade-offs. The PLC in the reference implementation finds a periodicity in the decoded signal and repeats the windowed waveform using the pitch offset. The windowed waveform is overlapped in such a way as to preserve the time-domain aliasing cancellation with the previous frame and the next frame. This is implemented in celt_decode_lost() (mdct.c). </t> </section> </section> </section> <section anchor="security" title="Security Considerations"> <t> The codec needs to take appropriate security considerations into account, as outlined in <xref target="DOS"/> and <xref target="SECGUIDE"/>. It is extremely important for the decoder to be robust against malicious payloads. Malicious payloads must not cause the decoder to overrun its allocated memory or to consume excessive resources when decoding. Although problems in encoders are typically rarer, the same applies to the encoder: a malicious audio stream must not cause the encoder to misbehave, because this would allow an attacker to attack transcoding gateways. </t> <t> In its current version, the Opus codec likely does NOT meet these security considerations, so it should be used with caution. </t> </section> <section title="IANA Considerations"> <t> This document has no actions for IANA. </t> </section> <section anchor="Acknowledgments" title="Acknowledgments"> <t> Thanks to all other developers, including Raymond Chen, Soeren Skak Jensen, Gregory Maxwell, Christopher Montgomery, Karsten Vandborg Soerensen, and Timothy Terriberry. </t> </section> </middle> <back> <references title="Informative References"> <reference anchor='SILK'> <front> <title>SILK Speech Codec</title> <author initials='K.' surname='Vos' fullname='K. Vos'> <organization /></author> <author initials='S.' surname='Jensen' fullname='S. Jensen'> <organization /></author> <author initials='K.' surname='Soerensen' fullname='K. Soerensen'> <organization /></author> <date year='2010' month='March' /> <abstract> <t></t> </abstract></front> <seriesInfo name='Internet-Draft' value='draft-vos-silk-01' /> <format type='TXT' target='http://tools.ietf.org/html/draft-vos-silk-01' /> </reference> <reference anchor="laroia-icassp"> <front> <title abbrev="Robust and Efficient Quantization of Speech LSP"> Robust and Efficient Quantization of Speech LSP Parameters Using Structured Vector Quantization </title> <author initials="R.L." surname="Laroia" fullname="R."> <organization/> </author> <author initials="N.P." surname="Phamdo" fullname="N."> <organization/> </author> <author initials="N.F." surname="Farvardin" fullname="N."> <organization/> </author> </front> <seriesInfo name="ICASSP-1991, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pp.
641-644, October" value="1991"/> </reference> <reference anchor="sinervo-norsig"> <front> <title abbrev="SVQ versus MSVQ">Evaluation of Split and Multistage Techniques in LSF Quantization</title> <author initials="U.S." surname="Sinervo" fullname="Ulpu Sinervo"> <organization/> </author> <author initials="J.N." surname="Nurminen" fullname="Jani Nurminen"> <organization/> </author> <author initials="A.H." surname="Heikkinen" fullname="Ari Heikkinen"> <organization/> </author> <author initials="J.S." surname="Saarinen" fullname="Jukka Saarinen"> <organization/> </author> </front> <seriesInfo name="NORSIG-2001, Norsk symposium i signalbehandling, Trondheim, Norge, October" value="2001"/> </reference> <reference anchor="leblanc-tsap"> <front> <title>Efficient Search and Design Procedures for Robust Multi-Stage VQ of LPC Parameters for 4 kb/s Speech Coding</title> <author initials="W.P." surname="LeBlanc" fullname=""> <organization/> </author> <author initials="B." surname="Bhattacharya" fullname=""> <organization/> </author> <author initials="S.A." surname="Mahmoud" fullname=""> <organization/> </author> <author initials="V." surname="Cuperman" fullname=""> <organization/> </author> </front> <seriesInfo name="IEEE Transactions on Speech and Audio Processing, Vol. 1, No. 4, October" value="1993" /> </reference> <reference anchor='CELT'> <front> <title>Constrained-Energy Lapped Transform (CELT) Codec</title> <author initials='J-M.' surname='Valin' fullname='J-M. Valin'> <organization /></author> <author initials='T.' surname='Terriberry' fullname='T. Terriberry'> <organization /></author> <author initials='G.' surname='Maxwell' fullname='G. Maxwell'> <organization /></author> <author initials='C.' surname='Montgomery' fullname='C. Montgomery'> <organization /></author> <date year='2010' month='July' /> <abstract> <t></t> </abstract></front> <seriesInfo name='Internet-Draft' value='draft-valin-celt-codec-02' /> <format type='TXT' target='http://tools.ietf.org/html/draft-valin-celt-codec-02' /> </reference> <reference anchor='DOS'> <front> <title>Internet Denial-of-Service Considerations</title> <author initials='M.' surname='Handley' fullname='M. Handley'> <organization /></author> <author initials='E.' surname='Rescorla' fullname='E. Rescorla'> <organization /></author> <author> <organization>IAB</organization></author> <date year='2006' month='December' /> <abstract> <t>This document provides an overview of possible avenues for denial-of-service (DoS) attack on Internet systems. The aim is to encourage protocol designers and network engineers towards designs that are more robust. We discuss partial solutions that reduce the effectiveness of attacks, and how some solutions might inadvertently open up alternative vulnerabilities. This memo provides information for the Internet community.</t></abstract></front> <seriesInfo name='RFC' value='4732' /> <format type='TXT' octets='91844' target='ftp://ftp.isi.edu/in-notes/rfc4732.txt' /> </reference> <reference anchor='SECGUIDE'> <front> <title>Guidelines for Writing RFC Text on Security Considerations</title> <author initials='E.' surname='Rescorla' fullname='E. Rescorla'> <organization /></author> <author initials='B.' surname='Korver' fullname='B. Korver'> <organization /></author> <date year='2003' month='July' /> <abstract> <t>All RFCs are required to have a Security Considerations section. Historically, such sections have been relatively weak. This document provides guidelines to RFC authors on how to write a good Security Considerations section.
This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t></abstract></front> <seriesInfo name='BCP' value='72' /> <seriesInfo name='RFC' value='3552' /> <format type='TXT' octets='110393' target='ftp://ftp.isi.edu/in-notes/rfc3552.txt' /> </reference> <reference anchor="range-coding"> <front> <title>Range encoding: An algorithm for removing redundancy from a digitised message</title> <author initials="G.N.N." surname="Martin" fullname="G. Nigel N. Martin"><organization/></author> <date year="1979" /> </front> <seriesInfo name="Proc. Institution of Electronic and Radio Engineers International Conference on Video and Data Recording" value="" /> </reference> <reference anchor="coding-thesis"> <front> <title>Source coding algorithms for fast data compression</title> <author initials="R." surname="Pasco" fullname=""><organization/></author> <date month="May" year="1976" /> </front> <seriesInfo name="Ph.D. thesis" value="Dept. of Electrical Engineering, Stanford University" /> </reference> <reference anchor="PVQ"> <front> <title>A Pyramid Vector Quantizer</title> <author initials="T." surname="Fischer" fullname=""><organization/></author> <date month="July" year="1986" /> </front> <seriesInfo name="IEEE Trans. on Information Theory, Vol. 32" value="pp. 568-583" /> </reference> </references> <section anchor="ref-implementation" title="Reference Implementation"> <t>This appendix contains the complete source code for the reference implementation of the Opus codec written in C. This implementation can be compiled for either floating-point or fixed-point architectures. </t> <t>The implementation can be compiled with either a C89 or a C99 compiler. It is reasonably optimized for most platforms such that only architecture-specific optimizations are likely to be useful. The FFT used is a slightly modified version of the KISS-FFT package, but it is easy to substitute any other FFT library. </t> <section title="Extracting the source"> <t> The complete source code can be extracted from this draft by running the following command line: <list style="symbols"> <t><![CDATA[ cat draft-ietf-codec-opus.txt | grep '^ ###' | sed 's/ ###//' | base64 -d > opus_source.tar.gz ]]></t> <t> tar xzvf opus_source.tar.gz </t> </list> </t> </section> <section title="Base64-encoded source code"> <t> <?rfc include="opus_source.base64"?> </t> </section> </section> </back> </rfc>