<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE rfc SYSTEM 'rfc2629.dtd'>
<?rfc toc="yes" symrefs="yes" ?>

<rfc ipr="trust200902" category="std" docName="draft-ietf-codec-opus-07">

<front>
<title abbrev="Interactive Audio Codec">Definition of the Opus Audio Codec</title>


<author initials="JM" surname="Valin" fullname="Jean-Marc Valin">
<organization>Octasic Inc.</organization>
<address>
<postal>
<street>4101, Molson Street</street>
<city>Montreal</city>
<region>Quebec</region>
<code></code>
<country>Canada</country>
</postal>
<phone>+1 514 282-8858</phone>
<email>[email protected]</email>
</address>
</author>

<author initials="K." surname="Vos" fullname="Koen Vos">
<organization>Skype Technologies S.A.</organization>
<address>
<postal>
<street>Stadsgarden 6</street>
<city>Stockholm</city>
<region></region>
<code>11645</code>
<country>SE</country>
</postal>
<phone>+46 855 921 989</phone>
<email>[email protected]</email>
</address>
</author>

<author initials="T." surname="Terriberry" fullname="Timothy Terriberry">
<organization>Mozilla Corporation</organization>
<address>
<postal>
<street>650 Castro Street</street>
<city>Mountain View</city>
<region>CA</region>
<code>94041</code>
<country>USA</country>
</postal>
<phone>+1 650 903-0800</phone>
<email>[email protected]</email>
</address>
</author>

<date day="7" month="July" year="2011" />

<area>General</area>

<workgroup></workgroup>

<abstract>
<t>
This document defines the Opus codec, designed for interactive speech and audio
 transmission over the Internet.
</t>
</abstract>
</front>

<middle>

<section anchor="introduction" title="Introduction">
<t>
The Opus codec is a real-time interactive audio codec composed of a linear
 prediction (LP)-based layer and a Modified Discrete Cosine Transform
 (MDCT)-based layer.
The main idea behind using two layers is that in speech, linear prediction
 techniques (such as CELP) code low frequencies more efficiently than transform
 (e.g., MDCT) domain techniques, while the situation is reversed for music and
 higher speech frequencies.
Thus a codec with both layers available can operate over a wider range than
 either one alone and, by combining them, achieve better quality than either
 one individually.
</t>

<t>
The primary normative part of this specification is provided by the source code
 in <xref target="ref-implementation"></xref>.
In general, only the decoder portion of this software is normative, though a
 significant amount of code is shared by both the encoder and decoder.
<!--TODO: Forward reference conformance test-->
The decoder contains significant amounts of integer and fixed-point arithmetic
 which must be performed exactly, including all rounding considerations, so any
 useful specification must make extensive use of domain-specific symbolic
 language to adequately define these operations.
Additionally, any
conflict between the symbolic representation and the included reference
implementation must be resolved. For the practical reasons of compatibility and
testability it would be advantageous to give the reference implementation
priority in any disagreement. The C language is also one of the most
widely understood human-readable symbolic representations for machine
behavior.
For these reasons this RFC uses the reference implementation as the sole
 symbolic representation of the codec.
</t>

<!--TODO: C is not unambiguous; many parts are implementation-defined-->
<t>While the symbolic representation is unambiguous and complete, it is not
always the easiest way to understand the codec's operation. For this reason
this document also describes significant parts of the codec in English and
takes the opportunity to explain the rationale behind many of the more
surprising elements of the design. These descriptions are intended to be
accurate and informative, but the limitations of common English sometimes
result in ambiguity, so it is expected that the reader will always read
them alongside the symbolic representation. Numerous references to the
implementation are provided for this purpose. The descriptions sometimes
differ from the reference in ordering or through mathematical simplification
wherever such deviation makes an explanation easier to understand.
For example, the right shift and left shift operations in the reference
implementation are often described using division and multiplication in the text.
In general, the text is focused on the "what" and "why" while the symbolic
representation most clearly provides the "how".
</t>

<section anchor="notation" title="Notation and Conventions">
<t>
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
 "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
 interpreted as described in RFC 2119.
</t>
<t>
Even when using floating-point, various operations in the codec require
 bit-exact fixed-point behavior.
The notation "Q<spanx style="emph">n</spanx>", where
 <spanx style="emph">n</spanx> is an integer, denotes the number of binary
 digits to the right of the decimal point in a fixed-point number.
For example, a signed Q14 value in a 16-bit word can represent values from
 -2.0 to 1.99993896484375, inclusive.
This notation is for informational purposes only.
Arithmetic, when described, always operates on the underlying integer.
E.g., the text will explicitly indicate any shifts required after a
 multiplication.
</t>
<t>
Expressions, where included in the text, follow C operator rules and
 precedence, with the exception that syntax like "2**n" is used to indicate 2
 raised to the power n.
The text also makes use of the following functions:
</t>

<section anchor="min" title="min(x,y)">
<t>
The smallest of two values x and y.
</t>
</section>

<section anchor="max" title="max(x,y)">
<t>
The largest of two values x and y.
</t>
</section>

<section anchor="clamp" title="clamp(lo,x,hi)">
<figure align="center">
<artwork align="center"><![CDATA[
clamp(lo,x,hi) = max(lo,min(x,hi))
]]></artwork>
</figure>
<t>
With this definition, if lo&gt;hi, the lower bound is the one that is enforced.
</t>
</section>

<section anchor="sign" title="sign(x)">
<t>
The sign of x, i.e.,
<figure align="center">
<artwork align="center"><![CDATA[
          ( -1,  x < 0 ,
sign(x) = <  0,  x == 0 ,
          (  1,  x > 0 .
]]></artwork>
</figure>
</t>
</section>

<section anchor="log2" title="log2(f)">
<t>
The base-two logarithm of f.
</t>
</section>

<section anchor="ilog" title="ilog(n)">
<t>
The minimum number of bits required to store a positive integer n in two's
 complement notation, or 0 for a non-positive integer n.
<figure align="center">
<artwork align="center"><![CDATA[
          ( 0,                 n <= 0,
ilog(n) = <
          ( floor(log2(n))+1,  n > 0
]]></artwork>
</figure>
Examples:
<list style="symbols">
<t>ilog(-1) = 0</t>
<t>ilog(0) = 0</t>
<t>ilog(1) = 1</t>
<t>ilog(2) = 2</t>
<t>ilog(3) = 2</t>
<t>ilog(4) = 3</t>
<t>ilog(7) = 3</t>
</list>
</t>
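<t>
For illustration only, ilog() can be computed with a simple loop, as in the
 following non-normative C sketch (which takes an unsigned argument, so
 non-positive inputs reduce to the n&nbsp;=&nbsp;0 case):
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch of ilog(): the minimum number of bits needed
   to store a positive integer n; returns 0 when n == 0. */
static int ilog(unsigned n)
{
    int bits = 0;
    while (n > 0) {
        bits++;
        n >>= 1;   /* discard one bit per iteration */
    }
    return bits;
}
]]></artwork>
</figure>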
</section>

</section>

</section>

<section anchor="overview" title="Opus Codec Overview">

<t>
The Opus codec scales from 6&nbsp;kb/s narrowband mono speech to 510&nbsp;kb/s
 fullband stereo music, with algorithmic delays ranging from 5&nbsp;ms to
 65.2&nbsp;ms.
At any given time, either the LP layer, the MDCT layer, or both, may be active.
It can seamlessly switch between all of its various operating modes, giving it
 a great deal of flexibility to adapt to varying content and network
 conditions without renegotiating the current session.
Internally, the codec always operates at a 48&nbsp;kHz sampling rate, though it
 allows input and output of various bandwidths, defined as follows:
</t>
<texttable>
<ttcol>Abbreviation</ttcol>
<ttcol align="right">Audio Bandwidth</ttcol>
<ttcol align="right">Sampling Rate (Effective)</ttcol>
<c>NB (narrowband)</c>       <c>4&nbsp;kHz</c>  <c>8&nbsp;kHz</c>
<c>MB (medium-band)</c>      <c>6&nbsp;kHz</c> <c>12&nbsp;kHz</c>
<c>WB (wideband)</c>         <c>8&nbsp;kHz</c> <c>16&nbsp;kHz</c>
<c>SWB (super-wideband)</c> <c>12&nbsp;kHz</c> <c>24&nbsp;kHz</c>
<c>FB (fullband)</c>        <c>20&nbsp;kHz</c> <c>48&nbsp;kHz</c>
</texttable>
<t>
These can be chosen independently on the encoder and decoder side, e.g., a
 fullband signal can be decoded as wideband, or vice versa.
This approach ensures a sender and receiver can always interoperate, regardless
 of the capabilities of their actual audio hardware.
</t>

<t>
The LP layer is based on the
 <eref target='http://developer.skype.com/silk'>SILK</eref> codec
 <xref target="SILK"></xref>.
It supports NB, MB, or WB audio and frame sizes from 10&nbsp;ms to 60&nbsp;ms,
 and requires an additional 5.2&nbsp;ms look-ahead for noise shaping estimation
 (5&nbsp;ms) and internal resampling (0.2&nbsp;ms).
Like Vorbis and many other modern codecs, SILK is inherently designed for
 variable-bitrate (VBR) coding, though an encoder can with sufficient effort
 produce constant-bitrate (CBR) or near-CBR streams.
</t>

<t>
The MDCT layer is based on the
 <eref target='http://www.celt-codec.org/'>CELT</eref>  codec
 <xref target="CELT"></xref>.
It supports NB, WB, SWB, or FB audio and frame sizes from 2.5&nbsp;ms
 to 20&nbsp;ms, and requires an additional 2.5&nbsp;ms look-ahead due to the
 overlapping MDCT windows.
The CELT codec is inherently designed for CBR coding, but unlike many CBR
 codecs it is not limited to a set of predetermined rates.
It internally allocates bits to exactly fill any given target budget, and an
 encoder can produce a VBR stream by varying the target on a per-frame basis.
The MDCT layer is not used for speech when the audio bandwidth is WB or less,
 as it is not useful there.
On the other hand, non-speech signals are not always adequately coded using
 linear prediction, so for music only the MDCT layer should be used.
</t>

<t>
A hybrid mode allows the use of both layers simultaneously with a frame size of
 10 or 20&nbsp;ms and a SWB or FB audio bandwidth.
Each frame is split into a low frequency signal and a high frequency signal,
 with a cutoff of 8&nbsp;kHz.
The LP layer then codes the low frequency signal, followed by the MDCT layer
 coding the high frequency signal.
In the MDCT layer, all bands below 8&nbsp;kHz are discarded, so there is no
 coding redundancy between the two layers.
</t>

<t>
At the decoder, the two decoder outputs are simply added together.
To compensate for the different look-aheads required by each layer, the CELT
 encoder input is delayed by an additional 2.7&nbsp;ms.
This ensures that low frequencies and high frequencies arrive at the same time.
This extra delay MAY be reduced by an encoder by using less lookahead for noise
 shaping or using a simpler resampler in the LP layer, but this will reduce
 quality.
However, the base 2.5&nbsp;ms look-ahead in the CELT layer cannot be reduced in
 the encoder because it is needed for the MDCT overlap, whose size is fixed by
 the decoder.
</t>

<t>
Both layers use the same entropy coder, avoiding any waste from "padding bits"
 between them.
The hybrid approach makes it easy to support both CBR and VBR coding.
Although the LP layer is VBR, the bit allocation of the MDCT layer can produce
 a final stream that is CBR by using all the bits left unused by the LP layer.
</t>

</section>

<section anchor="modes" title="Codec Modes">
<t>
As described, the two layers can be combined in three possible operating modes:
<list style="numbers">
<t>A LP-only mode for use in low bitrate connections with an audio bandwidth of
 WB or less,</t>
<t>A hybrid (LP+MDCT) mode for SWB or FB speech at medium bitrates, and</t>
<t>An MDCT-only mode for very low delay speech transmission as well as music
 transmission.</t>
</list>
A single packet may contain multiple audio frames; however, they must share a
 common set of parameters, including the operating mode, audio bandwidth, frame
 size, and channel count.
A single-byte table-of-contents (TOC) header signals which of the various modes
 and configurations a given packet uses.
It is composed of a frame count code, "c", a stereo flag, "s", and a
 configuration number, "config", arranged as illustrated in
 <xref target="toc_byte"/>.
A description of each of these fields follows.
</t>

<figure anchor="toc_byte" title="The TOC byte">
<artwork align="center"><![CDATA[
 0
 0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
| c |s| config  |
+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>

<t>
The top five bits of the TOC byte, labeled "config", encode one of 32 possible
 configurations of operating mode, audio bandwidth, and frame size.
<xref target="config_bits"/> lists the parameters for each configuration.
</t>
<texttable anchor="config_bits" title="TOC Byte Configuration Parameters">
<ttcol>Configuration Number(s)</ttcol>
<ttcol>Mode</ttcol>
<ttcol>Bandwidth</ttcol>
<ttcol>Frame Size(s)</ttcol>
<c>0...3</c>   <c>LP-only</c>   <c>NB</c>  <c>10, 20, 40, 60&nbsp;ms</c>
<c>4...7</c>   <c>LP-only</c>   <c>MB</c>  <c>10, 20, 40, 60&nbsp;ms</c>
<c>8...11</c>  <c>LP-only</c>   <c>WB</c>  <c>10, 20, 40, 60&nbsp;ms</c>
<c>12...13</c> <c>Hybrid</c>    <c>SWB</c> <c>10, 20&nbsp;ms</c>
<c>14...15</c> <c>Hybrid</c>    <c>FB</c>  <c>10, 20&nbsp;ms</c>
<c>16...19</c> <c>MDCT-only</c> <c>NB</c>  <c>2.5, 5, 10, 20&nbsp;ms</c>
<c>20...23</c> <c>MDCT-only</c> <c>WB</c>  <c>2.5, 5, 10, 20&nbsp;ms</c>
<c>24...27</c> <c>MDCT-only</c> <c>SWB</c> <c>2.5, 5, 10, 20&nbsp;ms</c>
<c>28...31</c> <c>MDCT-only</c> <c>FB</c>  <c>2.5, 5, 10, 20&nbsp;ms</c>
</texttable>

<t>
One additional bit, labeled "s", is used to signal mono vs. stereo, with 0
 indicating mono and 1 indicating stereo.
</t>
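<t>
As a non-normative illustration, the three TOC fields can be unpacked with
 simple shifts and masks following the layout above (the function and
 variable names here are purely illustrative):
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch: unpack the TOC byte.  "config" occupies the
   top five bits, "s" the next bit, and the frame count code "c" the
   remaining two bits. */
static void parse_toc(unsigned char toc,
                      int *config, int *stereo, int *code)
{
    *config = toc >> 3;          /* configuration number, 0...31 */
    *stereo = (toc >> 2) & 0x1;  /* 0: mono, 1: stereo */
    *code   = toc & 0x3;         /* frames-per-packet code, 0...3 */
}
]]></artwork>
</figure>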

<section title="Frame packing">
<t>
The remaining two bits of the TOC byte, labeled "c", code the number of frames per packet
 (codes 0 to 3) as follows:
<list style="symbols">
<t>0:    1 frame in the packet</t>
<t>1:    2 frames in the packet, each with equal compressed size</t>
<t>2:    2 frames in the packet, with different compressed sizes</t>
<t>3:    an arbitrary number of frames in the packet</t>
</list>
</t>

<t>
A well-formed Opus packet MUST contain at least one byte with the TOC
 information, though the frame(s) within a packet MAY be zero bytes long.
</t>

<t>
When a packet contains multiple VBR frames, the compressed length of one or
 more of these frames is indicated with a one or two byte sequence, with the
 meaning of the first byte as follows:
<list style="symbols">
<t>0:          No frame (DTX or lost packet)</t>
<!--TODO: Would be nice to be clearer about the distinction between "frame
 size" (in samples or ms) and "the compressed size of the frame" (in bytes).
"the compressed length of the frame" is maybe a little better, but not when we
 jump back and forth to talking about sizes.-->
<t>1...251:    Size of the frame in bytes</t>
<t>252...255:  A second byte is needed. The total size is (size[1]*4)+size[0]</t>
</list>
</t>

<t>
The maximum representable size is 255*4+255=1275&nbsp;bytes. This limit MUST NOT
be exceeded, even when no length field is used.
For 20&nbsp;ms frames, this represents a bitrate of 510&nbsp;kb/s, which is
 approximately the highest useful rate for lossily compressed fullband stereo
 music.
Beyond this point, lossless codecs are more appropriate.
It is also roughly the maximum useful rate of the MDCT layer, as shortly
 thereafter quality no longer improves with additional bits due to limitations
 on the codebook sizes.
</t>
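<t>
The one or two byte length sequence described above can be decoded as in the
 following non-normative sketch, which returns the number of length bytes
 consumed, or -1 if the packet is too short (names are illustrative only):
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch: decode a one or two byte frame length.
   Returns the number of length bytes consumed (1 or 2), or -1 if the
   packet does not contain enough data.  *size is set to the
   compressed size of the frame in bytes (0 indicates DTX or a lost
   frame). */
static int decode_frame_length(const unsigned char *data, int len,
                               int *size)
{
    if (len < 1)
        return -1;
    if (data[0] < 252) {
        *size = data[0];             /* 0...251: size in bytes */
        return 1;
    }
    if (len < 2)
        return -1;
    *size = 4*data[1] + data[0];     /* 252...255: two-byte form */
    return 2;                        /* maximum is 4*255+255 = 1275 */
}
]]></artwork>
</figure>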

<section title="One frame in the packet (code 0)">
<t>
For code 0 packets, the TOC byte is immediately followed by N-1&nbsp;bytes of
 compressed data for a single frame (where N is the size of the packet),
 as illustrated in <xref target="code0_packet"/>.
</t>
<figure anchor="code0_packet" title="A Code 0 Packet" align="center">
<artwork align="center"><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|0|s| config  |                                               |
+-+-+-+-+-+-+-+-+                                               |
|                    Compressed frame 1 (N-1 bytes)...          :
:                                                               |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</section>

<section title="Two frames in the packet, each with equal compressed size (code 1)">
<t>
For code 1 packets, the TOC byte is immediately followed by the
 (N-1)/2&nbsp;bytes of compressed data for the first frame, followed by
 (N-1)/2&nbsp;bytes of compressed data for the second frame, as illustrated in
 <xref target="code1_packet"/>.
The number of payload bytes available for compressed data, N-1, MUST be even
 for all code 1 packets.
</t>
<figure anchor="code1_packet" title="A Code 1 Packet" align="center">
<artwork align="center"><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0|s| config  |                                               |
+-+-+-+-+-+-+-+-+                                               :
|             Compressed frame 1 ((N-1)/2 bytes)...             |
:                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               :
|             Compressed frame 2 ((N-1)/2 bytes)...             |
:                                               +-+-+-+-+-+-+-+-+
|                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</section>

<section title="Two frames in the packet, with different compressed sizes (code 2)">
<t>
For code 2 packets, the TOC byte is followed by a one or two byte sequence
 indicating the length of the first frame (marked N1 in the figure below),
 followed by N1 bytes of compressed data for the first frame.
The remaining N-N1-2 or N-N1-3&nbsp;bytes are the compressed data for the
 second frame.
This is illustrated in <xref target="code2_packet"/>.
The length of the first frame, N1, MUST be no larger than the size of the
 payload remaining after decoding that length for all code 2 packets.
</t>
<figure anchor="code2_packet" title="A Code 2 Packet" align="center">
<artwork align="center"><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|1|s| config  | N1 (1-2 bytes):                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               :
|               Compressed frame 1 (N1 bytes)...                |
:                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               |
|                     Compressed frame 2...                     :
:                                                               |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</section>

<section title="Arbitrary number of frames in the packet (code 3)">
<t>
For code 3 packets, the TOC byte is followed by a byte encoding the number of
 frames in the packet in bits 0 to 5 (marked "M" in the figure below), with bit
 6 indicating whether or not padding is inserted (marked "p" in the figure
 below), and bit 7 indicating VBR (marked "v" in the figure below).
M MUST NOT be zero, and the audio duration contained within a packet MUST NOT
 exceed 120&nbsp;ms.
This limits the maximum frame count for any frame size to 48 (for 2.5&nbsp;ms
 frames), with lower limits for longer frame sizes.
<xref target="frame_count_byte"/> illustrates the layout of the frame count
 byte.
</t>
<figure anchor="frame_count_byte" title="The frame count byte">
<artwork align="center"><![CDATA[
 0
 0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|     M     |p|v|
+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
<t>
When padding is used, the number of bytes of padding is encoded in the
 bytes following the frame count byte.
Values from 0...254 indicate that 0...254&nbsp;bytes of padding are included,
 in addition to the byte(s) used to indicate the size of the padding.
If the value is 255, then the size of the additional padding is 254&nbsp;bytes,
 plus the padding value encoded in the next byte.
The additional padding bytes appear at the end of the packet, and SHOULD be set
 to zero by the encoder, however the decoder MUST accept any value for the
 padding bytes.
By using code 255 multiple times, it is possible to create a packet of any
 specific, desired size.
Let P be the total amount of padding, including both the trailing padding bytes
 themselves and the header bytes used to indicate how many there are.
Then P MUST be no more than N-2 for CBR packets, or N-M-1 for VBR packets.
</t>
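<t>
The padding length can be accumulated as in the following non-normative
 sketch, which adds 254&nbsp;bytes for each 255 value encountered (names are
 illustrative only):
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch: decode the padding length of a code 3 packet.
   data points just past the frame count byte and len is the number of
   bytes remaining.  On success, *pad holds the number of trailing
   padding bytes and the return value is the number of padding length
   bytes consumed; returns -1 if the packet is too short. */
static int decode_padding_length(const unsigned char *data, int len,
                                 int *pad)
{
    int consumed = 0;
    *pad = 0;
    for (;;) {
        int b;
        if (consumed >= len)
            return -1;
        b = data[consumed++];
        if (b < 255) {
            *pad += b;      /* final byte: 0...254 bytes of padding */
            return consumed;
        }
        *pad += 254;        /* 255: 254 bytes of padding, plus more */
    }
}
]]></artwork>
</figure>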
<t>
In the CBR case, the compressed length of each frame in bytes is equal to the
 number of remaining bytes in the packet after subtracting the (optional)
 padding, (N-2-P), divided by M.
This remaining number of bytes, (N-2-P), MUST be an integer multiple of M.
The compressed data for all M frames then follows, each of size
 (N-2-P)/M&nbsp;bytes, as illustrated in <xref target="code3cbr_packet"/>.
</t>

<figure anchor="code3cbr_packet" title="A CBR Code 3 Packet" align="center">
<artwork align="center"><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|1|s| config  |     M     |p|0|  Padding length (Optional)    :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:            Compressed frame 1 ((N-2-P)/M bytes)...            :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:            Compressed frame 2 ((N-2-P)/M bytes)...            :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:                              ...                              :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:            Compressed frame M ((N-2-P)/M bytes)...            :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:                     Padding (Optional)...                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>

<t>
In the VBR case, the (optional) padding length is followed by M-1 frame
 lengths (indicated by "N1" to "N[M-1]" in the figure below), each encoded in a
 one or two byte sequence as described above.
The packet MUST contain enough data for the M-1 lengths after the (optional)
 padding, and the sum of these lengths MUST be no larger than the number of
 bytes remaining in the packet after decoding them.
The compressed data for all M frames follows, each frame consisting of the
 indicated number of bytes, with the final frame consuming any remaining bytes
 before the final padding, as illustrated in <xref target="code3vbr_packet"/>.
The number of header bytes (TOC byte, frame count byte, padding length bytes,
 and frame length bytes), plus the length of the first M-1 frames themselves,
 plus the length of the padding MUST be no larger than N, the total size of the
 packet.
</t>

<figure anchor="code3vbr_packet" title="A VBR Code 3 Packet" align="center">
<artwork align="center"><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|1|s| config  |     M     |p|1| Padding length (Optional)     :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
: N1 (1-2 bytes): N2 (1-2 bytes):     ...       :     N[M-1]    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:               Compressed frame 1 (N1 bytes)...                :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:               Compressed frame 2 (N2 bytes)...                :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:                              ...                              :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
:                     Compressed frame M...                     :
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:                     Padding (Optional)...                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</section>
</section>

<section anchor="examples" title="Examples">
<t>
Simplest case, one NB mono 20&nbsp;ms SILK frame:
</t>

<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|0|0|    1    |               compressed data...              :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>

<t>
Two FB mono 5&nbsp;ms CELT frames of the same compressed size:
</t>

<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0|0|   29    |               compressed data...              :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>

<t>
Two FB mono 20&nbsp;ms hybrid frames of different compressed size:
</t>

<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|1|0|   15    |     2     |0|1|      N1       |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|                       compressed data...                      :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>

<t>
Four FB stereo 20&nbsp;ms CELT frames of the same compressed size:
</t>

<figure>
<artwork><![CDATA[
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|1|1|   31    |     4     |0|0|      compressed data...       :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
</figure>
</section>

<section title="Extending Opus">
<t>
A receiver MUST NOT process packets which violate the rules above as normal Opus
 packets. They are reserved for future applications, such as in-band headers (containing
 metadata, etc.) or multichannel support.
</t>
</section>

</section>

<section title="Opus Decoder">
<t>
The Opus decoder consists of two main blocks: the SILK decoder and the CELT decoder.
The output of the Opus decoder is the sum of the outputs from the SILK and CELT decoders
with proper sample rate conversion and delay compensation as illustrated in the
block diagram below. At any given time, one or both of the SILK and CELT decoders
may be active.
</t>
<figure>
<artwork>
<![CDATA[
                       +-------+    +----------+
                       | SILK  |    |  sample  |
                    +->|decoder|--->|   rate   |----+
bit-    +-------+   |  |       |    |conversion|    v
stream  | Range |---+  +-------+    +----------+  /---\  audio
------->|decoder|                                 | + |------>
        |       |---+  +-------+    +----------+  \---/
        +-------+   |  | CELT  |    | Delay    |    ^
                    +->|decoder|----| compens- |----+
                       |       |    | ation    |
                       +-------+    +----------+
]]>
</artwork>
</figure>

<section anchor="range-decoder" title="Range Decoder">
<t>
Opus uses an entropy coder based on <xref target="range-coding"></xref>,
which is itself a rediscovery of the FIFO arithmetic code introduced by <xref target="coding-thesis"></xref>.
It is very similar to arithmetic encoding, except that encoding is done with
digits in any base instead of with bits,
so it is faster when using larger bases (i.e., an octet). All of the
calculations in the range coder must use bit-exact integer arithmetic.
</t>
<t>
Symbols may also be coded as <spanx style="emph">raw bits</spanx> packed
 directly into the bitstream, bypassing the range coder.
These are packed backwards starting at the end of the frame.
This reduces complexity and makes the stream more resilient to bit errors, as
 corruption in the raw bits will not desynchronize the decoding process, unlike
 corruption in the input to the range decoder.
Raw bits are only used in the CELT layer.
</t>
<t>
Each symbol coded by the range coder is drawn from a finite alphabet and coded
 in a separate <spanx style="emph">context</spanx>, which describes the size of
 the alphabet and the relative frequency of each symbol in that alphabet.
Opus only uses static contexts.
They are not adapted to the statistics of the data as it is coded.
</t>
<t>
The parameters needed to encode or decode a symbol in a given context are
 represented by a three-tuple (fl,fh,ft), with
 0 &lt;= fl &lt; fh &lt;= ft &lt;= 65535.
The values of this tuple are derived from the probability model for the
 symbol, represented by traditional <spanx style="emph">frequency counts</spanx>
 (although, since Opus uses static contexts, these are not updated as symbols
 are decoded).
Let f[i] be the frequency of the <spanx style="emph">i</spanx>th symbol in a
 context with <spanx style="emph">n</spanx> symbols total.
Then the three-tuple corresponding to the <spanx style="emph">k</spanx>th
 symbol is given by
</t>
<figure align="center">
<artwork align="center"><![CDATA[
     k-1                             n-1
     __                              __
fl = \  f[i],  fh = fl + f[k],  ft = \  f[i]
     /_                              /_
     i=0                             i=0
]]></artwork>
</figure>
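<t>
For illustration, the following non-normative sketch computes this
 three-tuple directly from the frequency counts of a context:
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch: compute the three-tuple (fl, fh, ft) for the
   k-th symbol of a context with n symbols and frequency counts f[]. */
static void symbol_tuple(const unsigned f[], int n, int k,
                         unsigned *fl, unsigned *fh, unsigned *ft)
{
    int i;
    *fl = 0;
    for (i = 0; i < k; i++)
        *fl += f[i];         /* cumulative frequency below symbol k */
    *fh = *fl + f[k];
    *ft = *fh;
    for (i = k + 1; i < n; i++)
        *ft += f[i];         /* total frequency of the context */
}
]]></artwork>
</figure>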
<t>
The range decoder extracts the symbols and integers encoded using the range
 encoder in <xref target="range-encoder"/>.
The range decoder maintains an internal state vector composed of the two-tuple
 (val,rng), representing the difference between the high end of the current
 range and the actual coded value, minus one, and the size of the current
 range, respectively.
Both val and rng are 32-bit unsigned integer values.
The decoder initializes rng to 128 and initializes val to 127 minus the top 7
 bits of the first input octet.
It then immediately normalizes the range using the procedure described in
 <xref target="range-decoder-renorm"/>.
</t>

<section anchor="decoding-symbols" title="Decoding Symbols">
<t>
Decoding a symbol is a two-step process.
The first step determines a 16-bit unsigned value fs, which lies within the
 range of some symbol in the current context.
The second step updates the range decoder state with the three-tuple (fl,fh,ft)
 corresponding to that symbol.
</t>
<t>
The first step is implemented by ec_decode() (entdec.c), which computes
 fs = ft - min(val/(rng/ft)+1, ft).
The divisions here are exact integer division.
</t>
<t>
The decoder then identifies the symbol in the current context corresponding to
 fs; i.e., the one whose three-tuple (fl,fh,ft) satisfies fl &lt;= fs &lt; fh.
It uses this tuple to update val according to
 val = val - (rng/ft)*(ft-fh).
If fl is greater than zero, then the decoder updates rng using
 rng = (rng/ft)*(fh-fl).
Otherwise, it updates rng using rng = rng - (rng/ft)*(ft-fh).
After these updates, implemented by ec_dec_update() (entdec.c), it normalizes
 the range using the procedure in the next section, and returns the index of
 the identified symbol.
</t>
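<t>
The two steps can be summarized by the following non-normative sketch, which
 mirrors the behavior of ec_decode() and ec_dec_update() described above
 (renormalization, covered in the next section, is not shown, and the
 function names are illustrative only):
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch of the first decoding step: compute fs from
   the decoder state (val, rng) and the total frequency ft.  All
   divisions are exact integer divisions. */
static unsigned ec_decode_sketch(unsigned val, unsigned rng,
                                 unsigned ft)
{
    unsigned q = val/(rng/ft) + 1;
    return ft - (q < ft ? q : ft);   /* fs = ft - min(q, ft) */
}

/* Non-normative sketch of the second step: update (val, rng) once the
   symbol with three-tuple (fl, fh, ft) has been identified. */
static void ec_dec_update_sketch(unsigned *val, unsigned *rng,
                                 unsigned fl, unsigned fh, unsigned ft)
{
    unsigned s = (*rng/ft)*(ft - fh);
    *val -= s;
    *rng = (fl > 0) ? (*rng/ft)*(fh - fl) : *rng - s;
}
]]></artwork>
</figure>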
<t>
With this formulation, all the truncation error from using finite precision
 arithmetic accumulates in symbol 0.
This makes the cost of coding a 0 slightly smaller, on average, than the
 negative log of its estimated probability and makes the cost of coding any
 other symbol slightly larger.
When contexts are designed so that 0 is the most probable symbol, which is
 often the case, this strategy minimizes the inefficiency introduced by the
 finite precision.
</t>

<section anchor="range-decoder-renorm" title="Renormalization">
<t>
To normalize the range, the decoder repeats the following process, implemented
 by ec_dec_normalize() (entdec.c), until rng > 2**23.
If rng is already greater than 2**23, the entire process is skipped.
First, it sets rng to (rng&lt;&lt;8).
Then it reads the next 8 bits of input into sym, using the remaining bit from
 the previous input octet as the high bit of sym, and the top 7 bits of the
 next octet as the remaining bits of sym.
If no more input octets remain, it uses zero bits instead.
Then, it sets val to ((val&lt;&lt;8)+(255-sym))&amp;0x7FFFFFFF.
</t>
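<t>
The following non-normative sketch illustrates this loop.
It assumes the decoder tracks the previously read octet in a variable
 "saved" and reads range coder input from a simple buffer; the reference
 implementation structures this differently, so the code below is for
 exposition only.
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch of range decoder renormalization.  buf/len is
   the range coder input, *pos the number of octets consumed so far,
   and *saved the previously read octet. */
static void ec_dec_normalize_sketch(const unsigned char *buf, int len,
                                    int *pos, unsigned *saved,
                                    unsigned *val, unsigned *rng)
{
    while (*rng <= (1u << 23)) {
        /* Use zero bits once the input is exhausted. */
        unsigned next = (*pos < len) ? buf[(*pos)++] : 0;
        /* sym: the leftover bit of the previous octet followed by
           the top 7 bits of the next octet. */
        unsigned sym = ((*saved & 1) << 7) | (next >> 1);
        *saved = next;
        *rng <<= 8;
        *val = ((*val << 8) + (255 - sym)) & 0x7FFFFFFF;
    }
}
]]></artwork>
</figure>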
<t>
It is normal and expected that the range decoder will read several bytes
 into the raw bits data (if any) at the end of the packet by the time the frame
 is completely decoded, as illustrated in <xref target="finalize-example"/>.
This same data MUST also be returned as raw bits when requested.
The encoder is expected to terminate the stream in such a way that the decoder
 will decode the intended values regardless of the data contained in the raw
 bits.
<xref target="encoder-finalizing"/> describes a procedure for doing this.
If the range decoder consumes all of the bytes belonging to the current frame,
 it MUST continue to use zero when any further input bytes are required, even
 if there is additional data in the current packet, from padding or other
 frames.
</t>

<figure anchor="finalize-example" title="Illustrative example of raw bits
 overlapping range coder data">
<artwork align="center"><![CDATA[
 n               n+1             n+2             n+3
 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:     | <----------- Overlap region ------------> |             :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      ^                                           ^
      |   End of data buffered by the range coder |
...-----------------------------------------------+
      |
      | End of data consumed by raw bits
      +-------------------------------------------------------...
]]></artwork>
</figure>
</section>
</section>

<section anchor="decoding-alternate" title="Alternate Decoding Methods">
<t>
The reference implementation uses three additional decoding methods that are
 exactly equivalent to the above, but make assumptions and simplifications that
 allow for a more efficient implementation.
</t>
<section title="ec_decode_bin()">
<t>
The first is ec_decode_bin() (entdec.c), defined using the parameter ftb
 instead of ft.
It is mathematically equivalent to calling ec_decode() with
 ft = (1&lt;&lt;ftb), but avoids one of the divisions.
</t>
</section>
<section title="ec_dec_bit_logp()">
<t>
The next is ec_dec_bit_logp() (entdec.c), which decodes a single binary symbol,
 replacing both the ec_decode() and ec_dec_update() steps.
The context is described by a single parameter, logp, which is the absolute
 value of the base-2 logarithm of the probability of a "1".
It is mathematically equivalent to calling ec_decode() with
 ft = (1&lt;&lt;logp), followed by ec_dec_update() with
 fl = 0, fh = (1&lt;&lt;logp)-1, ft = (1&lt;&lt;logp) if the returned value
 of fs is less than (1&lt;&lt;logp)-1 (a "0" was decoded), and with
 fl = (1&lt;&lt;logp)-1, fh = ft = (1&lt;&lt;logp) otherwise (a "1" was
 decoded).
The implementation requires no multiplications or divisions.
</t>
</section>
<section title="ec_dec_icdf()">
<t>
The last is ec_dec_icdf() (entdec.c), which decodes a single symbol with a
 table-based context of up to 8 bits, also replacing both the ec_decode() and
 ec_dec_update() steps, as well as the search for the decoded symbol in between.
The context is described by two parameters, an icdf
 (<spanx style="emph">inverse</spanx> cumulative distribution function)
 table and ftb.
As with ec_decode_bin(), (1&lt;&lt;ftb) is equivalent to ft.
icdf[k], on the other hand, stores (1&lt;&lt;ftb)-fh for the kth symbol in
 the context, which is equal to (1&lt;&lt;ftb)-fl for the (k+1)st symbol.
fl for the 0th symbol is assumed to be 0, and the table is terminated by a
 value of 0 (where fh&nbsp;==&nbsp;ft).
</t>
<t>
The function is mathematically equivalent to calling ec_decode() with
 ft = (1&lt;&lt;ftb), using the returned value fs to search the table for the
 first entry where fs &lt; (1&lt;&lt;ftb)-icdf[k], and calling
 ec_dec_update() with fl = (1&lt;&lt;ftb)-icdf[k-1] (or 0 if k&nbsp;==&nbsp;0),
 fh = (1&lt;&lt;ftb)-icdf[k], and ft = (1&lt;&lt;ftb).
Combining the search with the update allows the division to be replaced by a
 series of multiplications (which are usually much cheaper), and using an
 inverse CDF allows the use of an ftb as large as 8 in an 8-bit table without
 any special cases.
This is the primary interface with the range decoder in the SILK layer, though
 it is used in a few places in the CELT layer as well.
</t>
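<t>
For illustration, the table search can be written as in the following
 non-normative sketch, given the value fs returned by ec_decode() with
 ft equal to two raised to the power ftb:
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch: find the symbol k coded by an icdf[] table.
   The caller then updates the decoder with fl = (1<<ftb)-icdf[k-1]
   (or 0 when k == 0), fh = (1<<ftb)-icdf[k], and ft = (1<<ftb). */
static int icdf_search(const unsigned char *icdf, unsigned ftb,
                       unsigned fs)
{
    int k = 0;
    /* The table is terminated by a 0 entry (fh == ft), and fs is
       always less than ft, so this loop always terminates. */
    while (fs >= (1u << ftb) - icdf[k])
        k++;
    return k;
}
]]></artwork>
</figure>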
<t>
Although icdf[k] is more convenient for the code, the frequency counts, f[k],
 are a more natural representation of the probability distribution function
 (PDF) for a given symbol.
Therefore this draft lists the latter, not the former, when describing the
 context in which a symbol is coded as a list, e.g., {4, 4, 4, 4}/16 for a
 uniform context with four possible values and ft=16.
The value of ft after the slash is always the sum of the entries in the PDF,
 but is included for convenience.
Contexts with identical probabilities, f[k]/ft, but different values of ft
 (or equivalently, ftb) are not the same, and cannot, in general, be used in
 place of one another.
An icdf table is also not capable of representing a PDF where the first symbol
 has 0 probability.
In such contexts, ec_dec_icdf() can decode the symbol by using a table that
 drops the entries for any initial zero-probability values and adding the
 constant offset of the first value with a non-zero probability to its return
 value.
</t>
</section>
</section>

<section anchor="decoding-bits" title="Decoding Raw Bits">
<t>
The raw bits used by the CELT layer are packed at the end of the packet, with
 the least significant bit of the first value to be packed in the least
 significant bit of the last byte, filling up to the most significant bit in
 the last byte, and continuing on to the least significant bit of the
 penultimate byte, and so on.
The reference implementation reads them using ec_dec_bits() (entdec.c).
Because the range decoder must read several bytes ahead in the stream, as
 described in <xref target="range-decoder-renorm"/>, the input consumed by the
 raw bits MAY overlap with the input consumed by the range coder, and a decoder
 MUST allow this.
The format should render it impossible to attempt to read more raw bits than
 there are actual bits in the frame, though a decoder MAY wish to check for
 this and report an error.
</t>
</section>

<section anchor="decoding-ints" title="Decoding Uniformly Distributed Integers">
<t>
The ec_dec_uint() (entdec.c) function decodes one of ft equiprobable values in
 the range 0 to ft-1, inclusive, each with a frequency of 1, where ft may be as
 large as 2**32-1.
Because ec_decode() is limited to a total frequency of 2**16-1, this is split
 up into a range coded symbol representing up to 8 of the high bits of the
 value, and, if necessary, raw bits representing the remaining bits.
The limit of 8 bits in the range coded symbol is a trade-off between
 implementation complexity, modeling error (since the symbols no longer truly
 have equal coding cost) and rounding error introduced by the range coder
 itself (which gets larger as more bits are included).
Using raw bits reduces the maximum number of divisions required in the worst
 case, but means that it may be possible to decode a value outside the range
 0 to ft-1, inclusive.
</t>

<t>
ec_dec_uint() takes a single, positive parameter, ft, which is not necessarily
 a power of two, and returns an integer, t, whose value lies between 0 and
 ft-1, inclusive.
Let ftb = ilog(ft-1), i.e., the number of bits required to store ft-1 in two's
 complement notation.
If ftb is 8 or less, then t is decoded with t = ec_decode(ft), and the range
 coder state is updated using the three-tuple (t,t+1,ft).
</t>
<t>
If ftb is greater than 8, then the top 8 bits of t are decoded using
 t = ec_decode((ft-1&gt;&gt;ftb-8)+1),
 the decoder state is updated using the three-tuple
 (t,t+1,(ft-1&gt;&gt;ftb-8)+1), and the remaining bits are decoded as raw bits,
 setting t = t&lt;&lt;ftb-8|ec_dec_bits(ftb-8).
If, at this point, t >= ft, then the current frame is corrupt.
In that case, the decoder should assume there has been an error in the coding,
 decoding, or transmission and SHOULD take measures to conceal the
 error and/or report to the application that a problem has occurred.
</t>
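<t>
The procedure can be sketched as follows in terms of the ec_decode(),
 ec_dec_update(), ec_dec_bits(), and ilog() operations described in this
 document.
The prototypes below are simplified for exposition (the reference
 implementation passes an explicit decoder object), and the sketch is
 non-normative:
</t>
<figure>
<artwork><![CDATA[
/* Simplified stand-ins for the operations described earlier. */
unsigned ec_decode(unsigned ft);
void     ec_dec_update(unsigned fl, unsigned fh, unsigned ft);
unsigned ec_dec_bits(unsigned nbits);
int      ilog(unsigned n);

/* Non-normative sketch of ec_dec_uint(): decode one of ft
   equiprobable values in the range 0...ft-1.  Returns 0 on success
   and stores the value in *t, or returns -1 if the decoded value is
   out of range (a corrupt frame). */
static int ec_dec_uint_sketch(unsigned ft, unsigned *t)
{
    unsigned ftb = ilog(ft - 1);
    if (ftb <= 8) {
        *t = ec_decode(ft);
        ec_dec_update(*t, *t + 1, ft);
        return 0;
    }
    /* Decode up to 8 high bits with the range coder... */
    *t = ec_decode(((ft - 1) >> (ftb - 8)) + 1);
    ec_dec_update(*t, *t + 1, ((ft - 1) >> (ftb - 8)) + 1);
    /* ...and the remaining bits as raw bits. */
    *t = (*t << (ftb - 8)) | ec_dec_bits(ftb - 8);
    return (*t < ft) ? 0 : -1;
}
]]></artwork>
</figure>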

</section>

<section anchor="decoder-tell" title="Current Bit Usage">
<t>
The bit allocation routines in the CELT decoder need a conservative upper bound
 on the number of bits that have been used from the current frame thus far,
 including both range coder bits and raw bits.
This drives allocation decisions that must match those made in the encoder.
The upper bound is computed in the reference implementation to whole-bit
 precision by the function ec_tell() (entcode.h) and to fractional 1/8th bit
 precision by the function ec_tell_frac() (entcode.c).
Like all operations in the range coder, it must be implemented in a bit-exact
 manner, and must produce exactly the same value returned by the same functions
 in the encoder after encoding the same symbols.
</t>
<t>
ec_tell() is guaranteed to return ceil(ec_tell_frac()/8.0).
In various places the codec will check to ensure there is enough room to
 contain a symbol before attempting to decode it.
In practice, although the number of bits used so far is an upper bound,
 decoding a symbol whose probability model suggests it has a worst-case cost of
 p 1/8th bits may actually advance the return value of ec_tell_frac() by
 p-1, p, or p+1 1/8th bits, due to approximation error in that upper bound,
 truncation error in the range coder, and for large values of ft, modeling
 error in ec_dec_uint().
</t>
<t>
However, this error is bounded, and periodic calls to ec_tell() or
 ec_tell_frac() at precisely defined points in the decoding process prevent it
 from accumulating.
For a symbol that requires a whole number of bits (i.e., ft/(fh-fl) is a power
 of two, including values of ft larger than 2**8 with ec_dec_uint()), and there
 are at least p 1/8th bits available, decoding the symbol will never advance
 the decoder past the end of the frame, i.e., will never
 <spanx style="emph">bust</spanx> the budget.
Frames contain a whole number of bits, and the return value of ec_tell_frac()
 will only advance by more than p 1/8th bits in this case if there was a
 fractional number of bits remaining, and by no more than the fractional part.
However, when p is not a whole number of bits, an extra 1/8th bit is required
 to ensure decoding the symbol will not bust.
</t>
<t>
The reference implementation keeps track of the total number of whole bits that
 have been processed by the decoder so far in a variable nbits_total, including
 the (possibly fractional) number of bits that are currently buffered (but not
 consumed) inside the range coder.
nbits_total is initialized to 33 just after the initial range renormalization
 process completes (or equivalently, it can be initialized to 9 before the
 first renormalization).
The extra two bits over the actual amount buffered by the range coder
 guarantee that it is an upper bound and that there is enough room for the
 encoder to terminate the stream.
Each iteration through the range coder's renormalization loop increases
 nbits_total by 8.
Reading raw bits increases nbits_total by the number of raw bits read.
</t>

<section anchor="ec_tell" title="ec_tell()">
<t>
The whole number of bits buffered in rng may be estimated via l = ilog(rng).
ec_tell() then becomes a simple matter of removing these bits from the total.
It returns (nbits_total - l).
</t>
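<t>
A non-normative sketch of this computation:
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch of ec_tell(): a whole-bit upper bound on the
   number of bits used so far. */
static int ec_tell_sketch(int nbits_total, unsigned rng)
{
    int l = 0;
    while (rng > 0) {    /* l = ilog(rng) */
        l++;
        rng >>= 1;
    }
    return nbits_total - l;
}
]]></artwork>
</figure>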
<t>
In a newly initialized decoder, before any symbols have been read, this reports
 that 1 bit has been used.
This is the bit reserved for termination of the encoder.
</t>
</section>

<section anchor="ec_tell_frac" title="ec_tell_frac()">
<t>
ec_tell_frac() estimates the number of bits buffered in rng to fractional
 precision.
Since rng must be greater than 2**23 after renormalization, l must be at least
 24.
Let r = rng&gt;&gt;(l-16), so that 32768 &lt;= r &lt; 65536, an unsigned Q15
 value representing the fractional part of rng.
Then the following procedure can be used to add one bit of precision to l.
First, update r = r*r&gt;&gt;15.
Then add the 16th bit of r to l via l = 2*l + (r&gt;&gt;16).
Finally, if this bit was a 1, reduce r by a factor of two via r = r&gt;&gt;1,
 so that it once again lies in the range 32768 &lt;= r &lt; 65536.
</t>
<t>
This procedure is repeated three times to extend l to 1/8th bit precision.
ec_tell_frac() then returns (nbits_total*8 - l).
</t>
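<t>
A non-normative sketch of this computation:
</t>
<figure>
<artwork><![CDATA[
/* Non-normative sketch of ec_tell_frac(): extend the whole-bit count
   to 1/8th bit precision using three iterations of the squaring
   procedure described above. */
static int ec_tell_frac_sketch(int nbits_total, unsigned rng)
{
    int i, l = 0;
    unsigned r, tmp = rng;
    while (tmp > 0) {        /* l = ilog(rng), at least 24 here */
        l++;
        tmp >>= 1;
    }
    r = rng >> (l - 16);     /* Q15 fraction, 32768 <= r < 65536 */
    for (i = 0; i < 3; i++) {
        int b;
        r = (r*r) >> 15;     /* square to extract one more bit */
        b = r >> 16;
        l = 2*l + b;
        r >>= b;             /* renormalize r if the bit was a 1 */
    }
    return nbits_total*8 - l;
}
]]></artwork>
</figure>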
</section>

</section>

</section>

<section anchor='outline_decoder' title='SILK Decoder'>
<t>
The decoder's LP layer uses a modified version of the SILK codec (herein simply
 called "SILK"), which runs a decoded excitation signal through adaptive
 long-term and short-term prediction synthesis filters.
It runs in NB, MB, and WB modes internally.
When used in a hybrid frame in SWB or FB mode, the LP layer itself still only
 runs in WB mode.
</t>
<t>
Internally, the LP layer of a single Opus frame is composed of either a single
 10&nbsp;ms SILK frame or between one and three 20&nbsp;ms SILK frames.
Each SILK frame is in turn composed of either two or four 5&nbsp;ms subframes.
Optional Low Bit-Rate Redundancy (LBRR) frames, which are reduced-bitrate
 encodings of previous SILK frames, may appear to aid in recovery from packet
 loss.
If present, these appear before the regular SILK frames.
They are in most respects identical to regular active SILK frames, except that
 they are usually encoded with a lower bitrate.
From here on, this draft uses "SILK frame" to refer to either one and
 "regular SILK frame" when it needs to draw a distinction between the two.
</t>
<t>
All of these frames and subframes are decoded from the same range coder, with
 no padding between them.
Thus packing multiple SILK frames in a single Opus frame saves, on average,
 half a byte per SILK frame.
It also allows some parameters to be predicted from prior SILK frames in the
 same Opus frame, since this does not degrade packet loss robustness (beyond
 any penalty for merely using fewer, larger packets to store multiple frames).
</t>

<t>
Stereo support in SILK uses a variant of mid-side coding, allowing a mono
 decoder to simply decode the mid channel.
However, the data for the two channels is interleaved, so a mono decoder must
 still unpack the data for the side channel.
It would be required to do so anyway for hybrid Opus frames, or to support
 decoding individual 20&nbsp;ms frames.
</t>

<texttable anchor="silk_symbols">
<ttcol align="center">Symbol(s)</ttcol>
<ttcol align="center">PDF</ttcol>
<ttcol align="center">Condition</ttcol>
<c>VAD flags</c>     <c>{1, 1}/2</c>                    <c></c>
<c>LBRR flag</c>     <c>{1, 1}/2</c>                    <c></c>
<c>Per-frame LBRR flags</c> <c><xref target="silk_lbrr_flags"/></c> <c><xref target="silk_lbrr_flags"/></c>
<c>Frame Type</c>    <c><xref target="silk_frame_type"/></c>    <c></c>
<c>Gain index</c>    <c><xref target="silk_gains"/></c> <c></c>
<postamble>
Order of the symbols in the SILK section of the bit-stream.
</postamble>
</texttable>

<section title="Decoder Modules">
<t>
An overview of the decoder is given in <xref target="decoder_figure"/>.
</t>
<figure align="center" anchor="decoder_figure">
<artwork align="center">
<![CDATA[

   +---------+    +------------+
-->| Range   |--->| Decode     |---------------------------+
 1 | Decoder | 2  | Parameters |----------+       5        |
   +---------+    +------------+     4    |                |
                       3 |                |                |
                        \/               \/               \/
                  +------------+   +------------+   +------------+
                  | Generate   |-->| LTP        |-->| LPC        |-->
                  | Excitation |   | Synthesis  |   | Synthesis  | 6
                  +------------+   +------------+   +------------+

1: Range encoded bitstream
2: Coded parameters
3: Pulses and gains
4: Pitch lags and LTP coefficients
5: LPC coefficients
6: Decoded signal
]]>
</artwork>
<postamble>Decoder block diagram.</postamble>
</figure>

          <section title='Range Decoder'>
            <t>
              The range decoder decodes the encoded parameters from the received bitstream. Output from this function includes the pulses and gains for the excitation signal generation, as well as the LTP and LSF codebook indices, which are needed to decode the LTP and LPC coefficients used for LTP and LPC synthesis filtering of the excitation signal, respectively.
            </t>
          </section>

          <section title='Decode Parameters'>
            <t>
              Pulses and gains are decoded from the parameters that were decoded by the range decoder.
            </t>

            <t>
              When a voiced frame is decoded and LTP codebook selection and indices are received, LTP coefficients are decoded using the selected codebook by choosing the vector that corresponds to the given codebook index in that codebook. This is done for each of the four subframes.
              The LPC coefficients are decoded from the LSF codebook by first adding the chosen LSF vector and the decoded LSF residual signal. The resulting LSF vector is stabilized using the same method that was used in the encoder, see
              <xref target='lsf_stabilizer_overview_section' />. The LSF coefficients are then converted to LPC coefficients, and passed on to the LPC synthesis filter.
            </t>
          </section>

          <section title='Generate Excitation'>
            <t>
              The pulse signal is multiplied by the quantization gain to create the excitation signal.
            </t>
          </section>

          <section title='LTP Synthesis'>
            <t>
              For voiced speech, the excitation signal e(n) is input to an LTP synthesis filter that will recreate the long term correlation that was removed in the LTP analysis filter and generate an LPC excitation signal e_LPC(n), according to
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                   d
                  __
e_LPC(n) = e(n) + \  e_LPC(n - L - i) * b_i,
                  /_
                 i=-d
]]>
                </artwork>
              </figure>
              using the pitch lag L, and the decoded LTP coefficients b_i.
              The number of LTP coefficients is 5, and thus d&nbsp;=&nbsp;2.

              For unvoiced speech, the output signal is simply a copy of the excitation signal, i.e., e_LPC(n) = e(n).
            </t>
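            <t>
              As a non-normative illustration, the LTP synthesis filter can be sketched as follows in floating-point, assuming e_LPC points into a buffer preceded by enough previously synthesized history (the reference implementation uses fixed-point arithmetic, so this sketch is for exposition only):
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
/* Non-normative sketch of LTP synthesis.  e[] is the excitation,
   e_LPC[] the output (with history before index 0), L the pitch lag,
   and b[] the 5 LTP coefficients (d = 2). */
static void ltp_synthesis_sketch(const float *e, float *e_LPC,
                                 int n_samples, int L,
                                 const float b[5])
{
    int n, i;
    for (n = 0; n < n_samples; n++) {
        float acc = e[n];
        for (i = -2; i <= 2; i++)
            acc += e_LPC[n - L - i] * b[i + 2];
        e_LPC[n] = acc;
    }
}
]]>
                </artwork>
              </figure>
            </t>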
          </section>

          <section title='LPC Synthesis'>
            <t>
              In a similar manner, the short-term correlation that was removed in the LPC analysis filter is recreated in the LPC synthesis filter. The LPC excitation signal e_LPC(n) is filtered using the LPC coefficients a_i, according to
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                 d_LPC
                  __
y(n) = e_LPC(n) + \  y(n - i) * a_i,
                  /_
                  i=1
]]>
                </artwork>
              </figure>
              where d_LPC is the LPC synthesis filter order, and y(n) is the decoded output signal.
            </t>
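            <t>
              Similarly, a non-normative floating-point sketch of the LPC synthesis filter, assuming y points into a buffer preceded by at least d_LPC samples of previously decoded output (the reference implementation uses fixed-point arithmetic):
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
/* Non-normative sketch of LPC synthesis.  e_LPC[] is the LPC
   excitation, y[] the output (with history before index 0), and a[]
   the d_LPC short-term prediction coefficients. */
static void lpc_synthesis_sketch(const float *e_LPC, float *y,
                                 int n_samples, const float *a,
                                 int d_LPC)
{
    int n, i;
    for (n = 0; n < n_samples; n++) {
        float acc = e_LPC[n];
        for (i = 1; i <= d_LPC; i++)
            acc += y[n - i] * a[i - 1];
        y[n] = acc;
    }
}
]]>
                </artwork>
              </figure>
            </t>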
          </section>
        </section>

<!--TODO: Document mandated decoder resets-->

<section title="Header Bits">
<t>
The LP layer begins with two to eight header bits, decoded in silk_Decode()
 (silk_dec_API.c).
These consist of one Voice Activity Detection (VAD) bit per frame (up to 3),
 followed by a single flag indicating the presence of LBRR frames.
For a stereo packet, these flags correspond to the mid channel, and a second
 set of flags is included for the side channel.
</t>
<t>
Because these are the first symbols decoded by the range coder, they can be
 extracted directly from the upper bits of the first byte of compressed data.
Thus, a receiver can determine if an Opus frame contains any active SILK frames
 without the overhead of using the range decoder.
</t>
</section>

<section anchor="silk_lbrr_flags" title="LBRR Flags">
<t>
For Opus frames longer than 20&nbsp;ms, a set of per-frame LBRR flags is
 decoded for each channel that has its LBRR flag set.
For 40&nbsp;ms Opus frames the 2-frame LBRR flag PDF from
 <xref target="silk_lbrr_flag_pdfs"/> is used, and for 60&nbsp;ms Opus frames
 the 3-frame LBRR flag PDF is used.
For each channel, the resulting 2- or 3-bit integer contains the corresponding
 LBRR flag for each frame, packed in order from the LSb to the MSb.
</t>

<texttable anchor="silk_lbrr_flag_pdfs" title="LBRR Flag PDFs">
<ttcol>Frame Size</ttcol>
<ttcol>PDF</ttcol>
<c>40&nbsp;ms</c> <c>{0, 53, 53, 150}/256</c>
<c>60&nbsp;ms</c> <c>{0, 41, 20, 29, 41, 15, 28, 82}/256</c>
</texttable>

<t>
LBRR frames do not include their own separate VAD flags.
An LBRR frame is only meant to be transmitted for active speech, thus all LBRR
 frames are treated as active.
</t>
</section>

<section title="SILK Frame Contents">
<t>
Each SILK frame includes a set of side information that encodes the frame type,
 quantization type and gains, short-term prediction filter coefficients, LSF
 interpolation weight, long-term prediction filter lags and gains, and a
 pseudorandom number generator (PRNG) seed.
This is followed by the quantized excitation signal.
</t>
<section anchor="silk_frame_type" title="Frame Type">
<t>
Each SILK frame begins with a single <spanx style="emph">frame type</spanx>
 symbol that jointly codes the signal type and quantization offset type of the
 corresponding frame.
If the current frame is a regular SILK frame whose VAD bit was not set (an
 <spanx style="emph">inactive</spanx> frame), then the frame type symbol takes
 on the value either 0 or 1 and is decoded using the first PDF in
 <xref target="silk_frame_type_pdfs"/>.
If the frame is an LBRR frame or a regular SILK frame whose VAD flag was set
 (an <spanx style="emph">active</spanx> frame), then the symbol ranges from 2
 to 5, inclusive, and is decoded using the second PDF in
 <xref target="silk_frame_type_pdfs"/>.
<xref target="silk_frame_type_table"/> translates between the value of the
 frame type symbol and the corresponding signal type and quantization offset
 type.
</t>

<texttable anchor="silk_frame_type_pdfs" title="Frame Type PDFs">
<ttcol>VAD Flag</ttcol>
<ttcol>PDF</ttcol>
<c>Inactive</c> <c>{26, 230, 0, 0, 0, 0}/256</c>
<c>Active</c>   <c>{0, 0, 24, 74, 148, 10}/256</c>
</texttable>

<texttable anchor="silk_frame_type_table"
 title="Signal Type and Quantization Offset Type from Frame Type">
<ttcol>Frame Type</ttcol>
<ttcol>Signal Type</ttcol>
<ttcol align="right">Quantization Offset Type</ttcol>
<c>0</c> <c>Inactive</c> <c>0</c>
<c>1</c> <c>Inactive</c> <c>1</c>
<c>2</c> <c>Unvoiced</c> <c>0</c>
<c>3</c> <c>Unvoiced</c> <c>1</c>
<c>4</c> <c>Voiced</c>   <c>0</c>
<c>5</c> <c>Voiced</c>   <c>1</c>
</texttable>
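<t>
As a non-normative restatement of <xref target="silk_frame_type_table"/>, the
 signal type is the frame type symbol divided by two and the quantization
 offset type is the symbol modulo two.
The function name below is illustrative only.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
/* Signal type: 0 = Inactive, 1 = Unvoiced, 2 = Voiced. */
static void
split_frame_type(int frame_type, int *signal_type,
                 int *quant_offset_type)
{
    *signal_type       = frame_type >> 1;
    *quant_offset_type = frame_type & 1;
}
]]></artwork>
</figure>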

</section>

<section anchor="silk_gains" title="Sub-Frame Gains">
<t>
A separate quantization gain is coded for each 5&nbsp;ms subframe.
These gains control the step size between quantization levels of the excitation
 signal and, therefore, the quality of the reconstruction.
They are independent of the pitch gains coded for voiced frames.
The quantization gains are themselves uniformly quantized to 6&nbsp;bits on a
 log scale, giving them a resolution of approximately 1.369&nbsp;dB and a range
 of approximately 1.94&nbsp;dB to 88.21&nbsp;dB.
</t>
<t>
For the first LBRR frame, an LBRR frame where the previous LBRR frame was not
 coded, or the first regular SILK frame in an Opus frame, the first subframe
 uses an independent coding method.
The 3 most significant bits of the quantization gain are decoded using a PDF
 selected from <xref target="silk_independent_gain_msb_pdfs"/> based on the
 decoded signal type.
</t>

<texttable anchor="silk_independent_gain_msb_pdfs"
 title="PDFs for Independent Quantization Gain MSb Coding">
<ttcol align="left">Signal Type</ttcol>
<ttcol align="left">PDF</ttcol>
<c>Inactive</c> <c>{32, 112, 68, 29, 12,  1,  1, 1}/256</c>
<c>Unvoiced</c> <c>{2,   17, 45, 60, 62, 47, 19, 4}/256</c>
<c>Voiced</c>   <c>{1,    3, 26, 71, 94, 50,  9, 2}/256</c>
</texttable>

<t>
The 3 least significant bits are decoded using a uniform PDF:
</t>
<texttable anchor="silk_independent_gain_lsb_pdf"
 title="PDF for Independent Quantization Gain LSb Coding">
<ttcol align="left">PDF</ttcol>
<c>{32, 32, 32, 32, 32, 32, 32, 32}/256</c>
</texttable>

<t>
For all other subframes (including the first subframe of frames not listed as
 using independent coding above), the quantization gain is coded relative to
 the gain from the previous subframe.
The PDF in <xref target="silk_delta_gain_pdf"/> yields a delta gain index
 between 0 and 40, inclusive.
</t>
<texttable anchor="silk_delta_gain_pdf"
 title="PDF for Delta Quantization Gain Coding">
<ttcol align="left">PDF</ttcol>
<c>{6,   5,  11,  31, 132,  21,   8,   4,
    3,   2,   2,   2,   1,   1,   1,   1,
    1,   1,   1,   1,   1,   1,   1,   1,
    1,   1,   1,   1,   1,   1,   1,   1,
    1,   1,   1,   1,   1,   1,   1,   1,   1}/256</c>
</texttable>
<t>
The following formula translates this index into a quantization gain for the
 current subframe using the gain from the previous subframe:
</t>
<figure align="center">
<artwork align="center"><![CDATA[
log_gain = min(max(2*gain_index - 16,
                   previous_log_gain + gain_index - 4), 63)
]]></artwork>
</figure>
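<t>
The following non-normative sketch restates this rule in C: large indices code
 an absolute gain, small indices code an offset from the previous subframe's
 gain, and the result saturates at 63.
The function name is illustrative only.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static int
delta_dequant_log_gain(int gain_index, int previous_log_gain)
{
    int log_gain = 2*gain_index - 16;
    if (log_gain < previous_log_gain + gain_index - 4)
        log_gain = previous_log_gain + gain_index - 4;
    if (log_gain > 63)
        log_gain = 63;
    return log_gain;
}
]]></artwork>
</figure>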
<t>
silk_gains_dequant() (silk_gain_quant.c) dequantizes the gain for the
 <spanx style="emph">k</spanx>th subframe and converts it into a linear Q16
 scale factor via
</t>
<figure align="center">
<artwork align="center"><![CDATA[
gain_Q16[k] = silk_log2lin((0x1D1C71*log_gain>>16) + 2090)
]]></artwork>
</figure>
<t>
The function silk_log2lin() (silk_log2lin.c) computes an approximation of
 2**(inLog_Q7/128.0), where inLog_Q7 is its Q7 input.
Let i = inLog_Q7&gt;&gt;7 be the integer part of inLog_Q7 and
 f = inLog_Q7&amp;127 be the fractional part.
Then, if i &lt; 16,
<figure align="center">
<artwork align="center"><![CDATA[
(1<<i) + ((((-174*f*(128-f)>>16)+f)*(1<<i))>>7)
]]></artwork>
</figure>
 yields the approximate exponential.
Otherwise, silk_log2lin uses
<figure align="center">
<artwork align="center"><![CDATA[
(1<<i) + ((-174*f*(128-f)>>16)+f)*((1<<i)>>7) .
]]></artwork>
</figure>
</t>
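<t>
The following non-normative sketch collects the two branches above into a
 single C function.
The name is illustrative only, and the sketch assumes a 32-bit int type and
 arithmetic right shifts of negative values, as the reference implementation
 does.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static int
log2lin_approx(int inLog_Q7)
{
    int i = inLog_Q7 >> 7;   /* integer part */
    int f = inLog_Q7 & 127;  /* fractional part */
    int frac = ((-174*f*(128 - f)) >> 16) + f;
    if (i < 16)
        return (1 << i) + ((frac*(1 << i)) >> 7);
    return (1 << i) + frac*((1 << i) >> 7);
}
]]></artwork>
</figure>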
</section>

<section anchor="silk_nlsfs" title="Normalized Line Spectral Frequencies">

<t>
Normalized Line Spectral Frequencies (LSFs) follow the quantization gains in
 the bitstream, and represent the Linear Prediction Coefficients (LPCs) for the
 current SILK frame.
Once decoded, they form an increasing list of Q15 values between 0 and 1.
These represent the interleaved zeros on the unit circle between 0 and pi
 (hence "normalized") in the standard decomposition of the LPC filter into a
 symmetric part and an anti-symmetric part (P and Q in
 <xref target="silk_nlsf2lpc"/>).
Because of non-linear effects in the decoding process, an implementation SHOULD
 match the fixed-point arithmetic described in this section exactly.
An encoder SHOULD also use the same process.
</t>
<t>
The normalized LSFs are coded using a two-stage vector quantizer (VQ).
NB and MB frames use an order-10 predictor, while WB frames use an order-16
 predictor, and thus have different sets of tables.
The first VQ stage uses a 32-element codebook, coded with one of the PDFs in
 <xref target="silk_nlsf_stage1_pdfs"/>, depending on the audio bandwidth and
 the signal type of the current SILK frame.
This yields a single index, <spanx style="emph">I1</spanx>, for the entire
 frame.
This indexes an element in a coarse codebook, selects the PDFs for the
 second stage of the VQ, and selects the prediction weights used to remove
 intra-frame redundancy from the second stage.
The actual codebook elements are listed in
 <xref target="silk_nlsf_nbmb_codebook"/> and
 <xref target="silk_nlsf_wb_codebook"/>, but they are not needed until the last
 stages of reconstructing the LSF coefficients.
</t>

<texttable anchor="silk_nlsf_stage1_pdfs"
 title="PDFs for Normalized LSF Index Stage-1 Decoding">
<ttcol align="left">Audio Bandwidth</ttcol>
<ttcol align="left">Signal Type</ttcol>
<ttcol align="left">PDF</ttcol>
<c>NB or MB</c> <c>Inactive or unvoiced</c>
<c>
{44, 34, 30, 19, 21, 12, 11,  3,
  3,  2, 16,  2,  2,  1,  5,  2,
  1,  3,  3,  1,  1,  2,  2,  2,
  3,  1,  9,  9,  2,  7,  2,  1}/256
</c>
<c>NB or MB</c> <c>Voiced</c>
<c>
{1, 10,  1,  8,  3,  8,  8, 14,
13, 14,  1, 14, 12, 13, 11, 11,
12, 11, 10, 10, 11,  8,  9,  8,
 7,  8,  1,  1,  6,  1,  6,  5}/256
</c>
<c>WB</c> <c>Inactive or unvoiced</c>
<c>
{31, 21,  3, 17,  1,  8, 17,  4,
  1, 18, 16,  4,  2,  3,  1, 10,
  1,  3, 16, 11, 16,  2,  2,  3,
  2, 11,  1,  4,  9,  8,  7,  3}/256
</c>
<c>WB</c> <c>Voiced</c>
<c>
{1,  4, 16,  5, 18, 11,  5, 14,
15,  1,  3, 12, 13, 14, 14,  6,
14, 12,  2,  6,  1, 12, 12, 11,
10,  3, 10,  5,  1,  1,  1,  3}/256
</c>
</texttable>

<t>
A total of 16 PDFs are available for the LSF residual in the second stage: the
 8 (a...h) for NB and MB frames given in
 <xref target="silk_nlsf_stage2_nbmb_pdfs"/>, and the 8 (i...p) for WB frames
 given in <xref target="silk_nlsf_stage2_wb_pdfs"/>.
Which PDF is used for which coefficient is driven by the index, I1,
 decoded in the first stage.
<xref target="silk_nlsf_nbmb_stage2_cb_sel"/> lists the letter of the
 corresponding PDF for each normalized LSF coefficient for NB and MB, and
 <xref target="silk_nlsf_wb_stage2_cb_sel"/> lists the same information for WB.
</t>

<texttable anchor="silk_nlsf_stage2_nbmb_pdfs"
 title="PDFs for NB/MB Normalized LSF Index Stage-2 Decoding">
<ttcol align="left">Codebook</ttcol>
<ttcol align="left">PDF</ttcol>
<c>a</c> <c>{1,   1,   1,  15, 224,  11,   1,   1,   1}/256</c>
<c>b</c> <c>{1,   1,   2,  34, 183,  32,   1,   1,   1}/256</c>
<c>c</c> <c>{1,   1,   4,  42, 149,  55,   2,   1,   1}/256</c>
<c>d</c> <c>{1,   1,   8,  52, 123,  61,   8,   1,   1}/256</c>
<c>e</c> <c>{1,   3,  16,  53, 101,  74,   6,   1,   1}/256</c>
<c>f</c> <c>{1,   3,  17,  55,  90,  73,  15,   1,   1}/256</c>
<c>g</c> <c>{1,   7,  24,  53,  74,  67,  26,   3,   1}/256</c>
<c>h</c> <c>{1,   1,  18,  63,  78,  58,  30,   6,   1}/256</c>
</texttable>

<texttable anchor="silk_nlsf_stage2_wb_pdfs"
 title="PDFs for WB Normalized LSF Index Stage-2 Decoding">
<ttcol align="left">Codebook</ttcol>
<ttcol align="left">PDF</ttcol>
<c>i</c> <c>{1,   1,   1,   9, 232,   9,   1,   1,   1}/256</c>
<c>j</c> <c>{1,   1,   2,  28, 186,  35,   1,   1,   1}/256</c>
<c>k</c> <c>{1,   1,   3,  42, 152,  53,   2,   1,   1}/256</c>
<c>l</c> <c>{1,   1,  10,  49, 126,  65,   2,   1,   1}/256</c>
<c>m</c> <c>{1,   4,  19,  48, 100,  77,   5,   1,   1}/256</c>
<c>n</c> <c>{1,   1,  14,  54, 100,  72,  12,   1,   1}/256</c>
<c>o</c> <c>{1,   1,  15,  61,  87,  61,  25,   4,   1}/256</c>
<c>p</c> <c>{1,   7,  21,  50,  77,  81,  17,   1,   1}/256</c>
</texttable>

<texttable anchor="silk_nlsf_nbmb_stage2_cb_sel"
 title="Codebook Selection for NB/MB Normalized LSF Index Stage 2 Decoding">
<ttcol>I1</ttcol>
<ttcol>Coefficient</ttcol>
<c/>
<c><spanx style="vbare">0&nbsp;1&nbsp;2&nbsp;3&nbsp;4&nbsp;5&nbsp;6&nbsp;7&nbsp;8&nbsp;9</spanx></c>
<c> 0</c>
<c><spanx style="vbare">a&nbsp;a&nbsp;a&nbsp;a&nbsp;a&nbsp;a&nbsp;a&nbsp;a&nbsp;a&nbsp;a</spanx></c>
<c> 1</c>
<c><spanx style="vbare">b&nbsp;d&nbsp;b&nbsp;c&nbsp;c&nbsp;b&nbsp;c&nbsp;b&nbsp;b&nbsp;b</spanx></c>
<c> 2</c>
<c><spanx style="vbare">c&nbsp;b&nbsp;b&nbsp;b&nbsp;b&nbsp;b&nbsp;b&nbsp;b&nbsp;b&nbsp;b</spanx></c>
<c> 3</c>
<c><spanx style="vbare">b&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;b&nbsp;c&nbsp;b&nbsp;b&nbsp;b</spanx></c>
<c> 4</c>
<c><spanx style="vbare">c&nbsp;d&nbsp;d&nbsp;d&nbsp;d&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c</spanx></c>
<c> 5</c>
<c><spanx style="vbare">a&nbsp;f&nbsp;d&nbsp;d&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;b&nbsp;b</spanx></c>
<c> 6</c>
<c><spanx style="vbare">a&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;b</spanx></c>
<c> 7</c>
<c><spanx style="vbare">c&nbsp;d&nbsp;g&nbsp;e&nbsp;e&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f</spanx></c>
<c> 8</c>
<c><spanx style="vbare">c&nbsp;e&nbsp;f&nbsp;f&nbsp;e&nbsp;f&nbsp;e&nbsp;g&nbsp;e&nbsp;e</spanx></c>
<c> 9</c>
<c><spanx style="vbare">c&nbsp;e&nbsp;e&nbsp;h&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f&nbsp;e</spanx></c>
<c>10</c>
<c><spanx style="vbare">e&nbsp;d&nbsp;d&nbsp;d&nbsp;c&nbsp;d&nbsp;c&nbsp;c&nbsp;c&nbsp;c</spanx></c>
<c>11</c>
<c><spanx style="vbare">b&nbsp;f&nbsp;f&nbsp;g&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f&nbsp;f</spanx></c>
<c>12</c>
<c><spanx style="vbare">c&nbsp;h&nbsp;e&nbsp;g&nbsp;f&nbsp;f&nbsp;f&nbsp;f&nbsp;f&nbsp;f</spanx></c>
<c>13</c>
<c><spanx style="vbare">c&nbsp;h&nbsp;f&nbsp;f&nbsp;f&nbsp;f&nbsp;f&nbsp;g&nbsp;f&nbsp;e</spanx></c>
<c>14</c>
<c><spanx style="vbare">d&nbsp;d&nbsp;f&nbsp;e&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;e&nbsp;e</spanx></c>
<c>15</c>
<c><spanx style="vbare">c&nbsp;d&nbsp;d&nbsp;f&nbsp;f&nbsp;e&nbsp;e&nbsp;e&nbsp;e&nbsp;e</spanx></c>
<c>16</c>
<c><spanx style="vbare">c&nbsp;e&nbsp;e&nbsp;g&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f&nbsp;f</spanx></c>
<c>17</c>
<c><spanx style="vbare">c&nbsp;f&nbsp;e&nbsp;g&nbsp;f&nbsp;f&nbsp;f&nbsp;e&nbsp;f&nbsp;e</spanx></c>
<c>18</c>
<c><spanx style="vbare">c&nbsp;h&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f&nbsp;f</spanx></c>
<c>19</c>
<c><spanx style="vbare">c&nbsp;f&nbsp;e&nbsp;g&nbsp;h&nbsp;g&nbsp;f&nbsp;g&nbsp;f&nbsp;e</spanx></c>
<c>20</c>
<c><spanx style="vbare">d&nbsp;g&nbsp;h&nbsp;e&nbsp;g&nbsp;f&nbsp;f&nbsp;g&nbsp;e&nbsp;f</spanx></c>
<c>21</c>
<c><spanx style="vbare">c&nbsp;h&nbsp;g&nbsp;e&nbsp;e&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;f</spanx></c>
<c>22</c>
<c><spanx style="vbare">e&nbsp;f&nbsp;f&nbsp;e&nbsp;g&nbsp;g&nbsp;f&nbsp;g&nbsp;f&nbsp;e</spanx></c>
<c>23</c>
<c><spanx style="vbare">c&nbsp;f&nbsp;f&nbsp;g&nbsp;f&nbsp;g&nbsp;e&nbsp;g&nbsp;e&nbsp;e</spanx></c>
<c>24</c>
<c><spanx style="vbare">e&nbsp;f&nbsp;f&nbsp;f&nbsp;d&nbsp;h&nbsp;e&nbsp;f&nbsp;f&nbsp;e</spanx></c>
<c>25</c>
<c><spanx style="vbare">c&nbsp;d&nbsp;e&nbsp;f&nbsp;f&nbsp;g&nbsp;e&nbsp;f&nbsp;f&nbsp;e</spanx></c>
<c>26</c>
<c><spanx style="vbare">c&nbsp;d&nbsp;c&nbsp;d&nbsp;d&nbsp;e&nbsp;c&nbsp;d&nbsp;d&nbsp;d</spanx></c>
<c>27</c>
<c><spanx style="vbare">b&nbsp;b&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;c&nbsp;d&nbsp;c&nbsp;c</spanx></c>
<c>28</c>
<c><spanx style="vbare">e&nbsp;f&nbsp;f&nbsp;g&nbsp;g&nbsp;g&nbsp;f&nbsp;g&nbsp;e&nbsp;f</spanx></c>
<c>29</c>
<c><spanx style="vbare">d&nbsp;f&nbsp;f&nbsp;e&nbsp;e&nbsp;e&nbsp;e&nbsp;d&nbsp;d&nbsp;c</spanx></c>
<c>30</c>
<c><spanx style="vbare">c&nbsp;f&nbsp;d&nbsp;h&nbsp;f&nbsp;f&nbsp;e&nbsp;e&nbsp;f&nbsp;e</spanx></c>
<c>31</c>
<c><spanx style="vbare">e&nbsp;e&nbsp;f&nbsp;e&nbsp;f&nbsp;g&nbsp;f&nbsp;g&nbsp;f&nbsp;e</spanx></c>
</texttable>

<texttable anchor="silk_nlsf_wb_stage2_cb_sel"
 title="Codebook Selection for WB Normalized LSF Index Stage 2 Decoding">
<ttcol>I1</ttcol>
<ttcol>Coefficient</ttcol>
<c/>
<c><spanx style="vbare">0&nbsp;&nbsp;1&nbsp;&nbsp;2&nbsp;&nbsp;3&nbsp;&nbsp;4&nbsp;&nbsp;5&nbsp;&nbsp;6&nbsp;&nbsp;7&nbsp;&nbsp;8&nbsp;&nbsp;9&nbsp;10&nbsp;11&nbsp;12&nbsp;13&nbsp;14&nbsp;15</spanx></c>
<c> 0</c>
<c><spanx style="vbare">i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
<c> 1</c>
<c><spanx style="vbare">k&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;l</spanx></c>
<c> 2</c>
<c><spanx style="vbare">k&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;p&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;k&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l</spanx></c>
<c> 3</c>
<c><spanx style="vbare">i&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;j</spanx></c>
<c> 4</c>
<c><spanx style="vbare">i&nbsp;&nbsp;o&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;p&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l</spanx></c>
<c> 5</c>
<c><spanx style="vbare">i&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;m</spanx></c>
<c> 6</c>
<c><spanx style="vbare">i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
<c> 7</c>
<c><spanx style="vbare">i&nbsp;&nbsp;k&nbsp;&nbsp;o&nbsp;&nbsp;l&nbsp;&nbsp;p&nbsp;&nbsp;k&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;l</spanx></c>
<c> 8</c>
<c><spanx style="vbare">i&nbsp;&nbsp;o&nbsp;&nbsp;k&nbsp;&nbsp;o&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;o&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l</spanx></c>
<c> 9</c>
<c><spanx style="vbare">k&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
<c>10</c>
<c><spanx style="vbare">i&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;j</spanx></c>
<c>11</c>
<c><spanx style="vbare">k&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;l</spanx></c>
<c>12</c>
<c><spanx style="vbare">k&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;l</spanx></c>
<c>13</c>
<c><spanx style="vbare">l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;m</spanx></c>
<c>14</c>
<c><spanx style="vbare">i&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;p&nbsp;&nbsp;n&nbsp;&nbsp;k&nbsp;&nbsp;o&nbsp;&nbsp;n&nbsp;&nbsp;p&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l</spanx></c>
<c>15</c>
<c><spanx style="vbare">i&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;j&nbsp;&nbsp;i</spanx></c>
<c>16</c>
<c><spanx style="vbare">j&nbsp;&nbsp;o&nbsp;&nbsp;n&nbsp;&nbsp;p&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;m</spanx></c>
<c>17</c>
<c><spanx style="vbare">j&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m</spanx></c>
<c>18</c>
<c><spanx style="vbare">k&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;m</spanx></c>
<c>19</c>
<c><spanx style="vbare">i&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
<c>20</c>
<c><spanx style="vbare">l&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;m</spanx></c>
<c>21</c>
<c><spanx style="vbare">k&nbsp;&nbsp;o&nbsp;&nbsp;l&nbsp;&nbsp;p&nbsp;&nbsp;p&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;l</spanx></c>
<c>22</c>
<c><spanx style="vbare">k&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;o&nbsp;&nbsp;o&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;m</spanx></c>
<c>23</c>
<c><spanx style="vbare">j&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;j</spanx></c>
<c>24</c>
<c><spanx style="vbare">k&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;o&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;p&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l</spanx></c>
<c>25</c>
<c><spanx style="vbare">i&nbsp;&nbsp;o&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
<c>26</c>
<c><spanx style="vbare">i&nbsp;&nbsp;o&nbsp;&nbsp;o&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;k&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;p&nbsp;&nbsp;p&nbsp;&nbsp;m&nbsp;&nbsp;m&nbsp;&nbsp;m</spanx></c>
<c>27</c>
<c><spanx style="vbare">l&nbsp;&nbsp;l&nbsp;&nbsp;p&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;l</spanx></c>
<c>28</c>
<c><spanx style="vbare">i&nbsp;&nbsp;i&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;j</spanx></c>
<c>29</c>
<c><spanx style="vbare">i&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;j</spanx></c>
<c>30</c>
<c><spanx style="vbare">l&nbsp;&nbsp;n&nbsp;&nbsp;n&nbsp;&nbsp;m&nbsp;&nbsp;p&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;i&nbsp;&nbsp;j&nbsp;&nbsp;i</spanx></c>
<c>31</c>
<c><spanx style="vbare">k&nbsp;&nbsp;l&nbsp;&nbsp;n&nbsp;&nbsp;l&nbsp;&nbsp;m&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;l&nbsp;&nbsp;k&nbsp;&nbsp;j&nbsp;&nbsp;k&nbsp;&nbsp;o&nbsp;&nbsp;m&nbsp;&nbsp;i&nbsp;&nbsp;i&nbsp;&nbsp;i</spanx></c>
</texttable>

<t>
Decoding the second stage residual proceeds as follows.
For each coefficient, the decoder reads a symbol using the PDF corresponding to
 I1 from either <xref target="silk_nlsf_nbmb_stage2_cb_sel"/> or
 <xref target="silk_nlsf_wb_stage2_cb_sel"/>, and subtracts 4 from the result
 to give an index in the range -4 to 4, inclusive.
If the index is either -4 or 4, it reads a second symbol using the PDF in
 <xref target="silk_nlsf_ext_pdf"/>, and adds the value of this second symbol
 to the index, using the same sign.
This gives the index, I2[k], a total range of -10 to 10, inclusive.
</t>

<texttable anchor="silk_nlsf_ext_pdf"
 title="PDF for Normalized LSF Index Extension Decoding">
<ttcol align="left">PDF</ttcol>
<c>{156, 60, 24,  9,  4,  2,  1}/256</c>
</texttable>
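<t>
The following non-normative sketch shows the per-coefficient decoding just
 described.
The helper read_symbol() is a hypothetical stand-in for decoding one symbol
 from the range coder with the given PDF and does not appear in the reference
 implementation.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
/* Hypothetical stand-in for the range decoder. */
extern int read_symbol(const unsigned char *pdf);

static int
decode_stage2_index(const unsigned char *pdf,
                    const unsigned char *ext_pdf)
{
    int i2 = read_symbol(pdf) - 4;   /* -4 to 4, inclusive */
    if (i2 == -4)
        i2 -= read_symbol(ext_pdf);  /* extend down to -10 */
    else if (i2 == 4)
        i2 += read_symbol(ext_pdf);  /* extend up to 10 */
    return i2;
}
]]></artwork>
</figure>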

<t>
The decoded indices from both stages are translated back into normalized LSF
 coefficients in silk_NLSF_decode() (silk_NLSF_decode.c).
The stage-2 indices represent residuals after both the first stage of the VQ
 and a separate backwards-prediction step.
The backwards prediction process in the encoder subtracts from each residual a
 prediction formed by a multiple of the coefficient that follows it.
The decoder must undo this process.
<xref target="silk_nlsf_pred_weights"/> contains lists of prediction weights
 for each coefficient.
There are two lists for NB and MB, and another two lists for WB, giving two
 possible prediction weights for each coefficient.
</t>

<texttable anchor="silk_nlsf_pred_weights"
 title="Prediction Weights for Normalized LSF Decoding">
<ttcol align="left">Coefficient</ttcol>
<ttcol align="right">A</ttcol>
<ttcol align="right">B</ttcol>
<ttcol align="right">C</ttcol>
<ttcol align="right">D</ttcol>
 <c>0</c> <c>179</c> <c>116</c> <c>175</c>  <c>68</c>
 <c>1</c> <c>138</c>  <c>67</c> <c>148</c>  <c>62</c>
 <c>2</c> <c>140</c>  <c>82</c> <c>160</c>  <c>66</c>
 <c>3</c> <c>148</c>  <c>59</c> <c>176</c>  <c>60</c>
 <c>4</c> <c>151</c>  <c>92</c> <c>178</c>  <c>72</c>
 <c>5</c> <c>149</c>  <c>72</c> <c>173</c> <c>117</c>
 <c>6</c> <c>153</c> <c>100</c> <c>174</c>  <c>85</c>
 <c>7</c> <c>151</c>  <c>89</c> <c>164</c>  <c>90</c>
 <c>8</c> <c>163</c>  <c>92</c> <c>177</c> <c>118</c>
 <c>9</c> <c/>        <c/>      <c>174</c> <c>136</c>
<c>10</c> <c/>        <c/>      <c>196</c> <c>151</c>
<c>11</c> <c/>        <c/>      <c>182</c> <c>142</c>
<c>12</c> <c/>        <c/>      <c>198</c> <c>160</c>
<c>13</c> <c/>        <c/>      <c>192</c> <c>142</c>
<c>14</c> <c/>        <c/>      <c>182</c> <c>155</c>
</texttable>

<t>
The prediction is undone using the procedure implemented in
 silk_NLSF_residual_dequant() (silk_NLSF_decode.c), which is as follows.
Each coefficient selects its prediction weight from one of the two lists based
 on the stage-1 index, I1.
<xref target="silk_nlsf_nbmb_weight_sel"/> gives the selections for each
 coefficient for NB and MB, and <xref target="silk_nlsf_wb_weight_sel"/> gives
 the selections for WB.
Let d_LPC be the order of the codebook, i.e., 10 for NB and MB, and 16 for WB,
 and let pred_Q8[k] be the weight for the <spanx style="emph">k</spanx>th
 coefficient selected by this process for
 0&nbsp;&lt;=&nbsp;k&nbsp;&lt;&nbsp;d_LPC-1.
Then, the stage-2 residual for each coefficient is computed via
<figure align="center">
<artwork align="center"><![CDATA[
  res_Q10[k] = (k+1 < d_LPC ? (res_Q10[k+1]*pred_Q8[k])>>8 : 0)
               + ((((I2[k]<<10) - sign(I2[k])*102)*qstep)>>16) ,
]]></artwork>
</figure>
 where qstep is the Q16 quantization step size, which is 11796 for NB and MB
 and 9830 for WB (representing step sizes of approximately 0.18 and 0.15,
 respectively).
</t>
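<t>
The following non-normative sketch applies this formula from the last
 coefficient down to the first, mirroring the recurrence.
The function name is illustrative only, and the sketch assumes a 32-bit int
 type and arithmetic right shifts of negative values.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static void
nlsf_residual_dequant(int res_Q10[], const int I2[],
                      const unsigned char pred_Q8[],
                      int qstep, int d_LPC)
{
    int k;
    for (k = d_LPC - 1; k >= 0; k--) {
        int sgn  = (I2[k] > 0) - (I2[k] < 0);
        int pred = (k + 1 < d_LPC)
                   ? ((res_Q10[k+1]*pred_Q8[k]) >> 8) : 0;
        res_Q10[k] = pred + ((((I2[k] << 10) - sgn*102)*qstep) >> 16);
    }
}
]]></artwork>
</figure>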

<texttable anchor="silk_nlsf_nbmb_weight_sel"
 title="Prediction Weight Selection for NB/MB Normalized LSF Decoding">
<ttcol>I1</ttcol>
<ttcol>Coefficient</ttcol>
<c/>
<c><spanx style="vbare">0&nbsp;1&nbsp;2&nbsp;3&nbsp;4&nbsp;5&nbsp;6&nbsp;7&nbsp;8</spanx></c>
<c> 0</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c> 1</c>
<c><spanx style="vbare">B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c> 2</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c> 3</c>
<c><spanx style="vbare">B&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c> 4</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c> 5</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c> 6</c>
<c><spanx style="vbare">B&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c> 7</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;A</spanx></c>
<c> 8</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A&nbsp;B&nbsp;B</spanx></c>
<c> 9</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;B</spanx></c>
<c>10</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c>11</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>12</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>13</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>14</c>
<c><spanx style="vbare">B&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B</spanx></c>
<c>15</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c>16</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c>17</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B</spanx></c>
<c>18</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>19</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c>20</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;B&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c>21</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>22</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B</spanx></c>
<c>23</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;B&nbsp;B</spanx></c>
<c>24</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B</spanx></c>
<c>25</c>
<c><spanx style="vbare">A&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;B&nbsp;A</spanx></c>
<c>26</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c>27</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c>28</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A</spanx></c>
<c>29</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;B&nbsp;A&nbsp;A&nbsp;A&nbsp;A&nbsp;A</spanx></c>
<c>30</c>
<c><spanx style="vbare">A&nbsp;A&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;A&nbsp;B</spanx></c>
<c>31</c>
<c><spanx style="vbare">B&nbsp;A&nbsp;B&nbsp;B&nbsp;A&nbsp;B&nbsp;B&nbsp;B&nbsp;B</spanx></c>
</texttable>

<texttable anchor="silk_nlsf_wb_weight_sel"
 title="Prediction Weight Selection for WB Normalized LSF Decoding">
<ttcol>I1</ttcol>
<ttcol>Coefficient</ttcol>
<c/>
<c><spanx style="vbare">0&nbsp;&nbsp;1&nbsp;&nbsp;2&nbsp;&nbsp;3&nbsp;&nbsp;4&nbsp;&nbsp;5&nbsp;&nbsp;6&nbsp;&nbsp;7&nbsp;&nbsp;8&nbsp;&nbsp;9&nbsp;10&nbsp;11&nbsp;12&nbsp;13&nbsp;14</spanx></c>
<c> 0</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c> 1</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c> 2</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c> 3</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c> 4</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
<c> 5</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c> 6</c>
<c><spanx style="vbare">D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
<c> 7</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c> 8</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D</spanx></c>
<c> 9</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c>10</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>11</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>12</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>13</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>14</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D</spanx></c>
<c>15</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
<c>16</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>17</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>18</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c>19</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>20</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>21</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
<c>22</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>23</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
<c>24</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D</spanx></c>
<c>25</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c>26</c>
<c><spanx style="vbare">C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D</spanx></c>
<c>27</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D</spanx></c>
<c>28</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c>29</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D</spanx></c>
<c>30</c>
<c><spanx style="vbare">D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;C</spanx></c>
<c>31</c>
<c><spanx style="vbare">C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C&nbsp;&nbsp;C&nbsp;&nbsp;D&nbsp;&nbsp;C</spanx></c>
</texttable>

<t>
The spectral distortion introduced by the quantization of each LSF coefficient
 varies, so the stage-2 residual is weighted accordingly, using the
 low-complexity weighting function proposed in <xref target="laroia-icassp"/>.
The weights are derived directly from the stage-1 codebook vector.
Let cb1_Q8[k] be the <spanx style="emph">k</spanx>th entry of the stage-1
 codebook vector from <xref target="silk_nlsf_nbmb_codebook"/> or
 <xref target="silk_nlsf_wb_codebook"/>.
Then for 0&nbsp;&lt;=&nbsp;k&nbsp;&lt;&nbsp;d_LPC the following expression
 computes the square of the weight as a Q18 value:
<figure align="center">
<artwork align="center">
<![CDATA[
w2_Q18[k] = (1024/(cb1_Q8[k] - cb1_Q8[k-1])
             + 1024/(cb1_Q8[k+1] - cb1_Q8[k])) << 16 ,
]]>
</artwork>
</figure>
 where cb1_Q8[-1]&nbsp;=&nbsp;0 and cb1_Q8[d_LPC]&nbsp;=&nbsp;256, and the
 division is exact integer division.
This is reduced to an unsquared Q9 value using the following square-root
 approximation:
<figure align="center">
<artwork align="center"><![CDATA[
i = ilog(w2_Q18[k])
f = (w2_Q18[k]>>(i-8)) & 127
y = ((i&1) ? 32768 : 46214) >> ((32-i)>>1)
w_Q9[k] = y + ((213*f*y)>>16)
]]></artwork>
</figure>
The cb1_Q8[] vector completely determines these weights, and they may be
 tabulated and stored as 13-bit unsigned values (with a range of 1819 to 5227)
 to avoid computing them when decoding.
The reference implementation computes them on the fly in
 silk_NLSF_VQ_weights_laroia() (silk_NLSF_VQ_weights_laroia.c) and its
 caller, to reduce the amount of ROM required.
</t>
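<t>
The following non-normative sketch computes the weights for one stage-1
 codebook vector using the two formulas above.
Here, ilog() returns the minimum number of bits needed to represent its
 argument (0 for an argument of 0); the other names are illustrative only, and
 the sketch assumes a 32-bit int type.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static int
ilog(unsigned int x)
{
    int ret = 0;
    while (x) { ret++; x >>= 1; }
    return ret;
}

static void
nlsf_vq_weights(int w_Q9[], const unsigned char cb1_Q8[], int d_LPC)
{
    int k;
    for (k = 0; k < d_LPC; k++) {
        int lo = (k > 0)       ? cb1_Q8[k-1] : 0;
        int hi = (k+1 < d_LPC) ? cb1_Q8[k+1] : 256;
        int w2_Q18 = (1024/(cb1_Q8[k] - lo)
                      + 1024/(hi - cb1_Q8[k])) << 16;
        /* Square-root approximation, yielding a Q9 weight. */
        int i = ilog(w2_Q18);
        int f = (w2_Q18 >> (i - 8)) & 127;
        int y = ((i & 1) ? 32768 : 46214) >> ((32 - i) >> 1);
        w_Q9[k] = y + ((213*f*y) >> 16);
    }
}
]]></artwork>
</figure>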

<texttable anchor="silk_nlsf_nbmb_codebook"
           title="Codebook Vectors for NB/MB Normalized LSF Stage 1 Decoding">
<ttcol>I1</ttcol>
<ttcol>Codebook</ttcol>
<c/>
<c><spanx style="vbare">&nbsp;0&nbsp;&nbsp;&nbsp;1&nbsp;&nbsp;&nbsp;2&nbsp;&nbsp;&nbsp;3&nbsp;&nbsp;&nbsp;4&nbsp;&nbsp;&nbsp;5&nbsp;&nbsp;&nbsp;6&nbsp;&nbsp;&nbsp;7&nbsp;&nbsp;&nbsp;8&nbsp;&nbsp;&nbsp;9</spanx></c>
<c>0</c>
<c><spanx style="vbare">12&nbsp;&nbsp;35&nbsp;&nbsp;60&nbsp;&nbsp;83&nbsp;108&nbsp;132&nbsp;157&nbsp;180&nbsp;206&nbsp;228</spanx></c>
<c>1</c>
<c><spanx style="vbare">15&nbsp;&nbsp;32&nbsp;&nbsp;55&nbsp;&nbsp;77&nbsp;101&nbsp;125&nbsp;151&nbsp;175&nbsp;201&nbsp;225</spanx></c>
<c>2</c>
<c><spanx style="vbare">19&nbsp;&nbsp;42&nbsp;&nbsp;66&nbsp;&nbsp;89&nbsp;114&nbsp;137&nbsp;162&nbsp;184&nbsp;209&nbsp;230</spanx></c>
<c>3</c>
<c><spanx style="vbare">12&nbsp;&nbsp;25&nbsp;&nbsp;50&nbsp;&nbsp;72&nbsp;&nbsp;97&nbsp;120&nbsp;147&nbsp;172&nbsp;200&nbsp;223</spanx></c>
<c>4</c>
<c><spanx style="vbare">26&nbsp;&nbsp;44&nbsp;&nbsp;69&nbsp;&nbsp;90&nbsp;114&nbsp;135&nbsp;159&nbsp;180&nbsp;205&nbsp;225</spanx></c>
<c>5</c>
<c><spanx style="vbare">13&nbsp;&nbsp;22&nbsp;&nbsp;53&nbsp;&nbsp;80&nbsp;106&nbsp;130&nbsp;156&nbsp;180&nbsp;205&nbsp;228</spanx></c>
<c>6</c>
<c><spanx style="vbare">15&nbsp;&nbsp;25&nbsp;&nbsp;44&nbsp;&nbsp;64&nbsp;&nbsp;90&nbsp;115&nbsp;142&nbsp;168&nbsp;196&nbsp;222</spanx></c>
<c>7</c>
<c><spanx style="vbare">19&nbsp;&nbsp;24&nbsp;&nbsp;62&nbsp;&nbsp;82&nbsp;100&nbsp;120&nbsp;145&nbsp;168&nbsp;190&nbsp;214</spanx></c>
<c>8</c>
<c><spanx style="vbare">22&nbsp;&nbsp;31&nbsp;&nbsp;50&nbsp;&nbsp;79&nbsp;103&nbsp;120&nbsp;151&nbsp;170&nbsp;203&nbsp;227</spanx></c>
<c>9</c>
<c><spanx style="vbare">21&nbsp;&nbsp;29&nbsp;&nbsp;45&nbsp;&nbsp;65&nbsp;106&nbsp;124&nbsp;150&nbsp;171&nbsp;196&nbsp;224</spanx></c>
<c>10</c>
<c><spanx style="vbare">30&nbsp;&nbsp;49&nbsp;&nbsp;75&nbsp;&nbsp;97&nbsp;121&nbsp;142&nbsp;165&nbsp;186&nbsp;209&nbsp;229</spanx></c>
<c>11</c>
<c><spanx style="vbare">19&nbsp;&nbsp;25&nbsp;&nbsp;52&nbsp;&nbsp;70&nbsp;&nbsp;93&nbsp;116&nbsp;143&nbsp;166&nbsp;192&nbsp;219</spanx></c>
<c>12</c>
<c><spanx style="vbare">26&nbsp;&nbsp;34&nbsp;&nbsp;62&nbsp;&nbsp;75&nbsp;&nbsp;97&nbsp;118&nbsp;145&nbsp;167&nbsp;194&nbsp;217</spanx></c>
<c>13</c>
<c><spanx style="vbare">25&nbsp;&nbsp;33&nbsp;&nbsp;56&nbsp;&nbsp;70&nbsp;&nbsp;91&nbsp;113&nbsp;143&nbsp;165&nbsp;196&nbsp;223</spanx></c>
<c>14</c>
<c><spanx style="vbare">21&nbsp;&nbsp;34&nbsp;&nbsp;51&nbsp;&nbsp;72&nbsp;&nbsp;97&nbsp;117&nbsp;145&nbsp;171&nbsp;196&nbsp;222</spanx></c>
<c>15</c>
<c><spanx style="vbare">20&nbsp;&nbsp;29&nbsp;&nbsp;50&nbsp;&nbsp;67&nbsp;&nbsp;90&nbsp;117&nbsp;144&nbsp;168&nbsp;197&nbsp;221</spanx></c>
<c>16</c>
<c><spanx style="vbare">22&nbsp;&nbsp;31&nbsp;&nbsp;48&nbsp;&nbsp;66&nbsp;&nbsp;95&nbsp;117&nbsp;146&nbsp;168&nbsp;196&nbsp;222</spanx></c>
<c>17</c>
<c><spanx style="vbare">24&nbsp;&nbsp;33&nbsp;&nbsp;51&nbsp;&nbsp;77&nbsp;116&nbsp;134&nbsp;158&nbsp;180&nbsp;200&nbsp;224</spanx></c>
<c>18</c>
<c><spanx style="vbare">21&nbsp;&nbsp;28&nbsp;&nbsp;70&nbsp;&nbsp;87&nbsp;106&nbsp;124&nbsp;149&nbsp;170&nbsp;194&nbsp;217</spanx></c>
<c>19</c>
<c><spanx style="vbare">26&nbsp;&nbsp;33&nbsp;&nbsp;53&nbsp;&nbsp;64&nbsp;&nbsp;83&nbsp;117&nbsp;152&nbsp;173&nbsp;204&nbsp;225</spanx></c>
<c>20</c>
<c><spanx style="vbare">27&nbsp;&nbsp;34&nbsp;&nbsp;65&nbsp;&nbsp;95&nbsp;108&nbsp;129&nbsp;155&nbsp;174&nbsp;210&nbsp;225</spanx></c>
<c>21</c>
<c><spanx style="vbare">20&nbsp;&nbsp;26&nbsp;&nbsp;72&nbsp;&nbsp;99&nbsp;113&nbsp;131&nbsp;154&nbsp;176&nbsp;200&nbsp;219</spanx></c>
<c>22</c>
<c><spanx style="vbare">34&nbsp;&nbsp;43&nbsp;&nbsp;61&nbsp;&nbsp;78&nbsp;&nbsp;93&nbsp;114&nbsp;155&nbsp;177&nbsp;205&nbsp;229</spanx></c>
<c>23</c>
<c><spanx style="vbare">23&nbsp;&nbsp;29&nbsp;&nbsp;54&nbsp;&nbsp;97&nbsp;124&nbsp;138&nbsp;163&nbsp;179&nbsp;209&nbsp;229</spanx></c>
<c>24</c>
<c><spanx style="vbare">30&nbsp;&nbsp;38&nbsp;&nbsp;56&nbsp;&nbsp;89&nbsp;118&nbsp;129&nbsp;158&nbsp;178&nbsp;200&nbsp;231</spanx></c>
<c>25</c>
<c><spanx style="vbare">21&nbsp;&nbsp;29&nbsp;&nbsp;49&nbsp;&nbsp;63&nbsp;&nbsp;85&nbsp;111&nbsp;142&nbsp;163&nbsp;193&nbsp;222</spanx></c>
<c>26</c>
<c><spanx style="vbare">27&nbsp;&nbsp;48&nbsp;&nbsp;77&nbsp;103&nbsp;133&nbsp;158&nbsp;179&nbsp;196&nbsp;215&nbsp;232</spanx></c>
<c>27</c>
<c><spanx style="vbare">29&nbsp;&nbsp;47&nbsp;&nbsp;74&nbsp;&nbsp;99&nbsp;124&nbsp;151&nbsp;176&nbsp;198&nbsp;220&nbsp;237</spanx></c>
<c>28</c>
<c><spanx style="vbare">33&nbsp;&nbsp;42&nbsp;&nbsp;61&nbsp;&nbsp;76&nbsp;&nbsp;93&nbsp;121&nbsp;155&nbsp;174&nbsp;207&nbsp;225</spanx></c>
<c>29</c>
<c><spanx style="vbare">29&nbsp;&nbsp;53&nbsp;&nbsp;87&nbsp;112&nbsp;136&nbsp;154&nbsp;170&nbsp;188&nbsp;208&nbsp;227</spanx></c>
<c>30</c>
<c><spanx style="vbare">24&nbsp;&nbsp;30&nbsp;&nbsp;52&nbsp;&nbsp;84&nbsp;131&nbsp;150&nbsp;166&nbsp;186&nbsp;203&nbsp;229</spanx></c>
<c>31</c>
<c><spanx style="vbare">37&nbsp;&nbsp;48&nbsp;&nbsp;64&nbsp;&nbsp;84&nbsp;104&nbsp;118&nbsp;156&nbsp;177&nbsp;201&nbsp;230</spanx></c>
</texttable>

<texttable anchor="silk_nlsf_wb_codebook"
           title="Codebook Vectors for WB Normalized LSF Stage 1 Decoding">
<ttcol>I1</ttcol>
<ttcol>Codebook</ttcol>
<c/>
<c><spanx style="vbare">&nbsp;0&nbsp;&nbsp;1&nbsp;&nbsp;2&nbsp;&nbsp;3&nbsp;&nbsp;4&nbsp;&nbsp;&nbsp;5&nbsp;&nbsp;&nbsp;6&nbsp;&nbsp;&nbsp;7&nbsp;&nbsp;&nbsp;8&nbsp;&nbsp;&nbsp;9&nbsp;&nbsp;10&nbsp;&nbsp;11&nbsp;&nbsp;12&nbsp;&nbsp;13&nbsp;&nbsp;14&nbsp;&nbsp;15</spanx></c>
<c>0</c>
<c><spanx style="vbare">&nbsp;7&nbsp;23&nbsp;38&nbsp;54&nbsp;69&nbsp;&nbsp;85&nbsp;100&nbsp;116&nbsp;131&nbsp;147&nbsp;162&nbsp;178&nbsp;193&nbsp;208&nbsp;223&nbsp;239</spanx></c>
<c>1</c>
<c><spanx style="vbare">13&nbsp;25&nbsp;41&nbsp;55&nbsp;69&nbsp;&nbsp;83&nbsp;&nbsp;98&nbsp;112&nbsp;127&nbsp;142&nbsp;157&nbsp;171&nbsp;187&nbsp;203&nbsp;220&nbsp;236</spanx></c>
<c>2</c>
<c><spanx style="vbare">15&nbsp;21&nbsp;34&nbsp;51&nbsp;61&nbsp;&nbsp;78&nbsp;&nbsp;92&nbsp;106&nbsp;126&nbsp;136&nbsp;152&nbsp;167&nbsp;185&nbsp;205&nbsp;225&nbsp;240</spanx></c>
<c>3</c>
<c><spanx style="vbare">10&nbsp;21&nbsp;36&nbsp;50&nbsp;63&nbsp;&nbsp;79&nbsp;&nbsp;95&nbsp;110&nbsp;126&nbsp;141&nbsp;157&nbsp;173&nbsp;189&nbsp;205&nbsp;221&nbsp;237</spanx></c>
<c>4</c>
<c><spanx style="vbare">17&nbsp;20&nbsp;37&nbsp;51&nbsp;59&nbsp;&nbsp;78&nbsp;&nbsp;89&nbsp;107&nbsp;123&nbsp;134&nbsp;150&nbsp;164&nbsp;184&nbsp;205&nbsp;224&nbsp;240</spanx></c>
<c>5</c>
<c><spanx style="vbare">10&nbsp;15&nbsp;32&nbsp;51&nbsp;67&nbsp;&nbsp;81&nbsp;&nbsp;96&nbsp;112&nbsp;129&nbsp;142&nbsp;158&nbsp;173&nbsp;189&nbsp;204&nbsp;220&nbsp;236</spanx></c>
<c>6</c>
<c><spanx style="vbare">&nbsp;8&nbsp;21&nbsp;37&nbsp;51&nbsp;65&nbsp;&nbsp;79&nbsp;&nbsp;98&nbsp;113&nbsp;126&nbsp;138&nbsp;155&nbsp;168&nbsp;179&nbsp;192&nbsp;209&nbsp;218</spanx></c>
<c>7</c>
<c><spanx style="vbare">12&nbsp;15&nbsp;34&nbsp;55&nbsp;63&nbsp;&nbsp;78&nbsp;&nbsp;87&nbsp;108&nbsp;118&nbsp;131&nbsp;148&nbsp;167&nbsp;185&nbsp;203&nbsp;219&nbsp;236</spanx></c>
<c>8</c>
<c><spanx style="vbare">16&nbsp;19&nbsp;32&nbsp;36&nbsp;56&nbsp;&nbsp;79&nbsp;&nbsp;91&nbsp;108&nbsp;118&nbsp;136&nbsp;154&nbsp;171&nbsp;186&nbsp;204&nbsp;220&nbsp;237</spanx></c>
<c>9</c>
<c><spanx style="vbare">11&nbsp;28&nbsp;43&nbsp;58&nbsp;74&nbsp;&nbsp;89&nbsp;105&nbsp;120&nbsp;135&nbsp;150&nbsp;165&nbsp;180&nbsp;196&nbsp;211&nbsp;226&nbsp;241</spanx></c>
<c>10</c>
<c><spanx style="vbare">&nbsp;6&nbsp;16&nbsp;33&nbsp;46&nbsp;60&nbsp;&nbsp;75&nbsp;&nbsp;92&nbsp;107&nbsp;123&nbsp;137&nbsp;156&nbsp;169&nbsp;185&nbsp;199&nbsp;214&nbsp;225</spanx></c>
<c>11</c>
<c><spanx style="vbare">11&nbsp;19&nbsp;30&nbsp;44&nbsp;57&nbsp;&nbsp;74&nbsp;&nbsp;89&nbsp;105&nbsp;121&nbsp;135&nbsp;152&nbsp;169&nbsp;186&nbsp;202&nbsp;218&nbsp;234</spanx></c>
<c>12</c>
<c><spanx style="vbare">12&nbsp;19&nbsp;29&nbsp;46&nbsp;57&nbsp;&nbsp;71&nbsp;&nbsp;88&nbsp;100&nbsp;120&nbsp;132&nbsp;148&nbsp;165&nbsp;182&nbsp;199&nbsp;216&nbsp;233</spanx></c>
<c>13</c>
<c><spanx style="vbare">17&nbsp;23&nbsp;35&nbsp;46&nbsp;56&nbsp;&nbsp;77&nbsp;&nbsp;92&nbsp;106&nbsp;123&nbsp;134&nbsp;152&nbsp;167&nbsp;185&nbsp;204&nbsp;222&nbsp;237</spanx></c>
<c>14</c>
<c><spanx style="vbare">14&nbsp;17&nbsp;45&nbsp;53&nbsp;63&nbsp;&nbsp;75&nbsp;&nbsp;89&nbsp;107&nbsp;115&nbsp;132&nbsp;151&nbsp;171&nbsp;188&nbsp;206&nbsp;221&nbsp;240</spanx></c>
<c>15</c>
<c><spanx style="vbare">&nbsp;9&nbsp;16&nbsp;29&nbsp;40&nbsp;56&nbsp;&nbsp;71&nbsp;&nbsp;88&nbsp;103&nbsp;119&nbsp;137&nbsp;154&nbsp;171&nbsp;189&nbsp;205&nbsp;222&nbsp;237</spanx></c>
<c>16</c>
<c><spanx style="vbare">16&nbsp;19&nbsp;36&nbsp;48&nbsp;57&nbsp;&nbsp;76&nbsp;&nbsp;87&nbsp;105&nbsp;118&nbsp;132&nbsp;150&nbsp;167&nbsp;185&nbsp;202&nbsp;218&nbsp;236</spanx></c>
<c>17</c>
<c><spanx style="vbare">12&nbsp;17&nbsp;29&nbsp;54&nbsp;71&nbsp;&nbsp;81&nbsp;&nbsp;94&nbsp;104&nbsp;126&nbsp;136&nbsp;149&nbsp;164&nbsp;182&nbsp;201&nbsp;221&nbsp;237</spanx></c>
<c>18</c>
<c><spanx style="vbare">15&nbsp;28&nbsp;47&nbsp;62&nbsp;79&nbsp;&nbsp;97&nbsp;115&nbsp;129&nbsp;142&nbsp;155&nbsp;168&nbsp;180&nbsp;194&nbsp;208&nbsp;223&nbsp;238</spanx></c>
<c>19</c>
<c><spanx style="vbare">&nbsp;8&nbsp;14&nbsp;30&nbsp;45&nbsp;62&nbsp;&nbsp;78&nbsp;&nbsp;94&nbsp;111&nbsp;127&nbsp;143&nbsp;159&nbsp;175&nbsp;192&nbsp;207&nbsp;223&nbsp;239</spanx></c>
<c>20</c>
<c><spanx style="vbare">17&nbsp;30&nbsp;49&nbsp;62&nbsp;79&nbsp;&nbsp;92&nbsp;107&nbsp;119&nbsp;132&nbsp;145&nbsp;160&nbsp;174&nbsp;190&nbsp;204&nbsp;220&nbsp;235</spanx></c>
<c>21</c>
<c><spanx style="vbare">14&nbsp;19&nbsp;36&nbsp;45&nbsp;61&nbsp;&nbsp;76&nbsp;&nbsp;91&nbsp;108&nbsp;121&nbsp;138&nbsp;154&nbsp;172&nbsp;189&nbsp;205&nbsp;222&nbsp;238</spanx></c>
<c>22</c>
<c><spanx style="vbare">12&nbsp;18&nbsp;31&nbsp;45&nbsp;60&nbsp;&nbsp;76&nbsp;&nbsp;91&nbsp;107&nbsp;123&nbsp;138&nbsp;154&nbsp;171&nbsp;187&nbsp;204&nbsp;221&nbsp;236</spanx></c>
<c>23</c>
<c><spanx style="vbare">13&nbsp;17&nbsp;31&nbsp;43&nbsp;53&nbsp;&nbsp;70&nbsp;&nbsp;83&nbsp;103&nbsp;114&nbsp;131&nbsp;149&nbsp;167&nbsp;185&nbsp;203&nbsp;220&nbsp;237</spanx></c>
<c>24</c>
<c><spanx style="vbare">17&nbsp;22&nbsp;35&nbsp;42&nbsp;58&nbsp;&nbsp;78&nbsp;&nbsp;93&nbsp;110&nbsp;125&nbsp;139&nbsp;155&nbsp;170&nbsp;188&nbsp;206&nbsp;224&nbsp;240</spanx></c>
<c>25</c>
<c><spanx style="vbare">&nbsp;8&nbsp;15&nbsp;34&nbsp;50&nbsp;67&nbsp;&nbsp;83&nbsp;&nbsp;99&nbsp;115&nbsp;131&nbsp;146&nbsp;162&nbsp;178&nbsp;193&nbsp;209&nbsp;224&nbsp;239</spanx></c>
<c>26</c>
<c><spanx style="vbare">13&nbsp;16&nbsp;41&nbsp;66&nbsp;73&nbsp;&nbsp;86&nbsp;&nbsp;95&nbsp;111&nbsp;128&nbsp;137&nbsp;150&nbsp;163&nbsp;183&nbsp;206&nbsp;225&nbsp;241</spanx></c>
<c>27</c>
<c><spanx style="vbare">17&nbsp;25&nbsp;37&nbsp;52&nbsp;63&nbsp;&nbsp;75&nbsp;&nbsp;92&nbsp;102&nbsp;119&nbsp;132&nbsp;144&nbsp;160&nbsp;175&nbsp;191&nbsp;212&nbsp;231</spanx></c>
<c>28</c>
<c><spanx style="vbare">19&nbsp;31&nbsp;49&nbsp;65&nbsp;83&nbsp;100&nbsp;117&nbsp;133&nbsp;147&nbsp;161&nbsp;174&nbsp;187&nbsp;200&nbsp;213&nbsp;227&nbsp;242</spanx></c>
<c>29</c>
<c><spanx style="vbare">18&nbsp;31&nbsp;52&nbsp;68&nbsp;88&nbsp;103&nbsp;117&nbsp;126&nbsp;138&nbsp;149&nbsp;163&nbsp;177&nbsp;192&nbsp;207&nbsp;223&nbsp;239</spanx></c>
<c>30</c>
<c><spanx style="vbare">16&nbsp;29&nbsp;47&nbsp;61&nbsp;76&nbsp;&nbsp;90&nbsp;106&nbsp;119&nbsp;133&nbsp;147&nbsp;161&nbsp;176&nbsp;193&nbsp;209&nbsp;224&nbsp;240</spanx></c>
<c>31</c>
<c><spanx style="vbare">15&nbsp;21&nbsp;35&nbsp;50&nbsp;61&nbsp;&nbsp;73&nbsp;&nbsp;86&nbsp;&nbsp;97&nbsp;110&nbsp;119&nbsp;129&nbsp;141&nbsp;175&nbsp;198&nbsp;218&nbsp;237</spanx></c>
</texttable>

<t>
Given the stage-1 codebook entry cb1_Q8[], the stage-2 residual res_Q10[], and
 their corresponding weights, w_Q9[], the reconstructed normalized LSF
 coefficients are
<figure align="center">
<artwork align="center"><![CDATA[
NLSF_Q15[k] = (cb1_Q8[k]<<7) + (res_Q10[k]<<14)/w_Q9[k] ,
]]></artwork>
</figure>
 where the division is exact integer division.
However, nothing thus far in the reconstruction process, nor in the
 quantization process in the encoder, guarantees that the coefficients are
 monotonically increasing and separated well enough to ensure a stable filter.
When using the reference encoder, roughly 2% of frames violate this constraint.
The next section describes a stabilization procedure used to make these
 guarantees.
</t>

<section anchor="silk_nlsf_stabilization" title="Normalized LSF Stabilization">
<t>
The normalized LSF stabilization procedure is implemented in
 silk_NLSF_stabilize() (silk_NLSF_stabilize.c).
This process ensures that consecutive values of the normalized LSF
 coefficients, NLSF_Q15[], are spaced some minimum distance apart
 (predetermined to be the 0.01 percentile of a large training set).
<xref target="silk_nlsf_min_spacing"/> gives the minimum spacings for NB and MB
 and those for WB, where row k is the minimum allowed value of
 NLSF_Q[k]-NLSF_Q[k-1].
For the purposes of computing this spacing for the first and last coefficient,
 NLSF_Q15[-1] is taken to be 0, and NLSF_Q15[d_LPC] is taken to be 32768.
</t>

<texttable anchor="silk_nlsf_min_spacing"
           title="Minimum Spacing for Normalized LSF Coefficients">
<ttcol>Coefficient</ttcol>
<ttcol align="right">NB and MB</ttcol>
<ttcol align="right">WB</ttcol>
 <c>0</c> <c>250</c> <c>100</c>
 <c>1</c>   <c>3</c>   <c>3</c>
 <c>2</c>   <c>6</c>  <c>40</c>
 <c>3</c>   <c>3</c>   <c>3</c>
 <c>4</c>   <c>3</c>   <c>3</c>
 <c>5</c>   <c>3</c>   <c>3</c>
 <c>6</c>   <c>4</c>   <c>5</c>
 <c>7</c>   <c>3</c>  <c>14</c>
 <c>8</c>   <c>3</c>  <c>14</c>
 <c>9</c>   <c>3</c>  <c>10</c>
<c>10</c> <c>461</c>  <c>11</c>
<c>11</c>       <c/>   <c>3</c>
<c>12</c>       <c/>   <c>8</c>
<c>13</c>       <c/>   <c>9</c>
<c>14</c>       <c/>   <c>7</c>
<c>15</c>       <c/>   <c>3</c>
<c>16</c>       <c/> <c>347</c>
</texttable>

<t>
The procedure starts by making small adjustments that attempt to minimize the
 amount of distortion introduced.
After 20 such adjustments, it falls back to a more direct method that
 guarantees the constraints are enforced but may require large adjustments.
</t>
<t>
Let NDeltaMin_Q15[k] be the minimum required spacing for the current audio
 bandwidth from <xref target="silk_nlsf_min_spacing"/>.
First, the procedure finds the index i where
 NLSF_Q15[i]&nbsp;-&nbsp;NLSF_Q15[i-1]&nbsp;-&nbsp;NDeltaMin_Q15[i] is the
 smallest, breaking ties by using the lower value of i.
If this value is non-negative, then the stabilization stops; the coefficients
 satisfy all the constraints.
Otherwise, if i&nbsp;==&nbsp;0, it sets NLSF_Q15[0] to NDeltaMin_Q15[0], and if
 i&nbsp;==&nbsp;d_LPC, it sets NLSF_Q15[d_LPC-1] to
 (32768&nbsp;-&nbsp;NDeltaMin_Q15[d_LPC]).
For all other values of i, both NLSF_Q15[i-1] and NLSF_Q15[i] are updated as
 follows:
<figure align="center">
<artwork align="center"><![CDATA[
                                          i-1
                                          __
 min_center_Q15 = (NDeltaMin_Q15[i]>>1) + \  NDeltaMin_Q15[k]
                                          /_
                                          k=0
                                                 d_LPC
                                                  __
 max_center_Q15 = 32768 - (NDeltaMin_Q15[i]>>1) - \  NDeltaMin_Q15[k]
                                                  /_
                                                 k=i+1
center_freq_Q15 = clamp(min_center_Q15,
                        (NLSF_Q15[i-1] + NLSF_Q15[i] + 1)>>1,
                        max_center_Q15)

 NLSF_Q15[i-1] = center_freq_Q15 - (NDeltaMin_Q15[i]>>1)

   NLSF_Q15[i] = NLSF_Q15[i-1] + NDeltaMin_Q15[i] .
]]></artwork>
</figure>
The procedure then repeats until either it has executed 20 times or it stops
 because the coefficients satisfy all the constraints.
</t>
<t>
After the 20th repetition of the above, the following fallback procedure
 executes once.
First, the values of NLSF_Q15[k] for 0&nbsp;&lt;=&nbsp;k&nbsp;&lt;&nbsp;d_LPC
 are sorted in ascending order.
Then for each value of k from 0 to d_LPC-1, NLSF_Q15[k] is set to
<figure align="center">
<artwork align="center"><![CDATA[
max(NLSF_Q15[k], NLSF_Q15[k-1] + NDeltaMin_Q15[k]) .
]]></artwork>
</figure>
Next, for each value of k from d_LPC-1 down to 0, NLSF_Q15[k] is set to
<figure align="center">
<artwork align="center"><![CDATA[
min(NLSF_Q15[k], NLSF_Q15[k+1] - NDeltaMin_Q15[k+1]) .
]]></artwork>
</figure>
</t>
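<t>
The following non-normative sketch implements the fallback procedure just
 described; the function name and the insertion sort are illustrative only.
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static void
nlsf_stabilize_fallback(int NLSF_Q15[], const int NDeltaMin_Q15[],
                        int d_LPC)
{
    int i, k;
    /* Sort NLSF_Q15[] in ascending order. */
    for (i = 1; i < d_LPC; i++) {
        int v = NLSF_Q15[i];
        for (k = i; k > 0 && NLSF_Q15[k-1] > v; k--)
            NLSF_Q15[k] = NLSF_Q15[k-1];
        NLSF_Q15[k] = v;
    }
    /* Forward pass (NLSF_Q15[-1] is taken to be 0). */
    for (k = 0; k < d_LPC; k++) {
        int prev = (k > 0) ? NLSF_Q15[k-1] : 0;
        if (NLSF_Q15[k] < prev + NDeltaMin_Q15[k])
            NLSF_Q15[k] = prev + NDeltaMin_Q15[k];
    }
    /* Backward pass (NLSF_Q15[d_LPC] is taken to be 32768). */
    for (k = d_LPC - 1; k >= 0; k--) {
        int next = (k + 1 < d_LPC) ? NLSF_Q15[k+1] : 32768;
        if (NLSF_Q15[k] > next - NDeltaMin_Q15[k+1])
            NLSF_Q15[k] = next - NDeltaMin_Q15[k+1];
    }
}
]]></artwork>
</figure>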

</section>

<section anchor="silk_nlsf_interpolation" title="Normalized LSF Interpolation">
<t>
For 20&nbsp;ms SILK frames, the first half of the frame (i.e., the first two
 subframes) may use normalized LSF coefficients that are interpolated between
 the decoded LSFs for the previous frame and the current frame.
A Q2 interpolation factor follows the LSF coefficient indices in the bitstream,
 which is decoded using the PDF in <xref target="silk_nlsf_interp_pdf"/>.
This happens in silk_decode_indices() (silk_decode_indices.c).
For the first frame after a decoder reset, when no prior LSF coefficients are
 available, the decoder still decodes this factor, but ignores its value and
 always uses 4 instead.
For 10&nbsp;ms SILK frames, this factor is not stored at all.
</t>

<texttable anchor="silk_nlsf_interp_pdf"
           title="PDF for Normalized LSF Interpolation Index">
<ttcol>PDF</ttcol>
<c>{13, 22, 29, 11, 181}/256</c>
</texttable>

<t>
Let n2_Q15[k] be the normalized LSF coefficients decoded by the procedure in
 <xref target="silk_nlsfs"/>, n0_Q15[k] be the LSF coefficients
 decoded for the prior frame, and w_Q2 be the interpolation factor.
Then the normalized LSF coefficients used for the first half of a 20&nbsp;ms
 frame, n1_Q15[k], are
<figure align="center">
<artwork align="center"><![CDATA[
n1_Q15[k] = n0_Q15[k] + (w_Q2*(n2_Q15[k] - n0_Q15[k]) >> 2) .
]]></artwork>
</figure>
This interpolation is performed in silk_decode_parameters()
 (silk_decode_parameters.c).
</t>
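<t>
A non-normative sketch of this interpolation follows (the function name is
 illustrative only):
</t>
<figure align="center">
<artwork align="center"><![CDATA[
static void
nlsf_interpolate(int n1_Q15[], const int n0_Q15[],
                 const int n2_Q15[], int w_Q2, int d_LPC)
{
    int k;
    for (k = 0; k < d_LPC; k++)
        n1_Q15[k] = n0_Q15[k]
                    + ((w_Q2*(n2_Q15[k] - n0_Q15[k])) >> 2);
}
]]></artwork>
</figure>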
</section>

<section anchor="silk_nlsf2lpc"
         title="Converting Normalized LSF Coefficients to LPCs">
<t>
Any LPC filter A(z) can be split into a symmetric part P(z) and an
 anti-symmetric part Q(z) such that
<figure align="center">
<artwork align="center"><![CDATA[
          d_LPC
           __         -k   1
A(z) = 1 - \  a[k] * z   = - * (P(z) + Q(z))
           /_              2
           k=1
]]></artwork>
</figure>
with
<figure align="center">
<artwork align="center"><![CDATA[
               -d_LPC-1      -1
P(z) = A(z) + z         * A(z  )

               -d_LPC-1      -1
Q(z) = A(z) - z         * A(z  ) .
]]></artwork>
</figure>
The even normalized LSF coefficients correspond to a pair of conjugate roots of
 P(z), while the odd coefficients correspond to a pair of conjugate roots of
 Q(z), all of which lie on the unit circle.
In addition, P(z) has a root at pi and Q(z) has a root at 0.
Thus, they may be reconstructed mathematically from a set of normalized LSF
 coefficients, n[k], as
<figure align="center">
<artwork align="center"><![CDATA[
                 d_LPC/2-1
             -1     ___                        -1    -2
P(z) = (1 + z  ) *  | |  (1 - 2*cos(pi*n[2*k])*z  + z  )
                    k=0

                 d_LPC/2-1
             -1     ___                          -1    -2
Q(z) = (1 - z  ) *  | |  (1 - 2*cos(pi*n[2*k+1])*z  + z  )
                    k=0
]]></artwork>
</figure>
</t>
<t>
However, SILK performs this reconstruction using a fixed-point approximation so
 that all decoders can reproduce it in a bit-exact manner to avoid prediction
 drift.
The function silk_NLSF2A() (silk_NLSF2A.c) implements this procedure.
</t>
<t>
To start, it approximates cos(pi*n[k]) using a table lookup with linear
 interpolation.
The encoder SHOULD use the inverse of this piecewise linear approximation,
 rather than the true inverse of the cosine function, when deriving the
 normalized LSF coefficients.
</t>
<t>
The top 7 bits of each normalized LSF coefficient index a value in the table,
 and the next 8 bits interpolate between it and the next value.
Let i&nbsp;=&nbsp;n[k]&gt;&gt;8 be the integer index and
 f&nbsp;=&nbsp;n[k]&amp;255 be the fractional part of a given coefficient.
Then the approximated cosine, c_Q17[k], is
<figure align="center">
<artwork align="center"><![CDATA[
c_Q17[k] = (cos_Q13[i]*256 + (cos_Q13[i+1]-cos_Q13[i])*f + 8) >> 4 ,
]]></artwork>
</figure>
 where cos_Q13[i] is the corresponding entry of
 <xref target="silk_cos_table"/>.
</t>

<texttable anchor="silk_cos_table"
           title="Q13 Cosine Table for LSF Conversion">
<ttcol align="right"></ttcol>
<ttcol align="right">0</ttcol>
<ttcol align="right">1</ttcol>
<ttcol align="right">2</ttcol>
<ttcol align="right">3</ttcol>
<c>0</c>
 <c>8192</c> <c>8190</c> <c>8182</c> <c>8170</c>
<c>4</c>
 <c>8152</c> <c>8130</c> <c>8104</c> <c>8072</c>
<c>8</c>
 <c>8034</c> <c>7994</c> <c>7946</c> <c>7896</c>
<c>12</c>
 <c>7840</c> <c>7778</c> <c>7714</c> <c>7644</c>
<c>16</c>
 <c>7568</c> <c>7490</c> <c>7406</c> <c>7318</c>
<c>20</c>
 <c>7226</c> <c>7128</c> <c>7026</c> <c>6922</c>
<c>24</c>
 <c>6812</c> <c>6698</c> <c>6580</c> <c>6458</c>
<c>28</c>
 <c>6332</c> <c>6204</c> <c>6070</c> <c>5934</c>
<c>32</c>
 <c>5792</c> <c>5648</c> <c>5502</c> <c>5352</c>
<c>36</c>
 <c>5198</c> <c>5040</c> <c>4880</c> <c>4718</c>
<c>40</c>
 <c>4552</c> <c>4382</c> <c>4212</c> <c>4038</c>
<c>44</c>
 <c>3862</c> <c>3684</c> <c>3502</c> <c>3320</c>
<c>48</c>
 <c>3136</c> <c>2948</c> <c>2760</c> <c>2570</c>
<c>52</c>
 <c>2378</c> <c>2186</c> <c>1990</c> <c>1794</c>
<c>56</c>
 <c>1598</c> <c>1400</c> <c>1202</c> <c>1002</c>
<c>60</c>
  <c>802</c>  <c>602</c>  <c>402</c>  <c>202</c>
<c>64</c>
    <c>0</c> <c>-202</c> <c>-402</c> <c>-602</c>
<c>68</c>
 <c>-802</c><c>-1002</c><c>-1202</c><c>-1400</c>
<c>72</c>
<c>-1598</c><c>-1794</c><c>-1990</c><c>-2186</c>
<c>76</c>
<c>-2378</c><c>-2570</c><c>-2760</c><c>-2948</c>
<c>80</c>
<c>-3136</c><c>-3320</c><c>-3502</c><c>-3684</c>
<c>84</c>
<c>-3862</c><c>-4038</c><c>-4212</c><c>-4382</c>
<c>88</c>
<c>-4552</c><c>-4718</c><c>-4880</c><c>-5040</c>
<c>92</c>
<c>-5198</c><c>-5352</c><c>-5502</c><c>-5648</c>
<c>96</c>
<c>-5792</c><c>-5934</c><c>-6070</c><c>-6204</c>
<c>100</c>
<c>-6332</c><c>-6458</c><c>-6580</c><c>-6698</c>
<c>104</c>
<c>-6812</c><c>-6922</c><c>-7026</c><c>-7128</c>
<c>108</c>
<c>-7226</c><c>-7318</c><c>-7406</c><c>-7490</c>
<c>112</c>
<c>-7568</c><c>-7644</c><c>-7714</c><c>-7778</c>
<c>116</c>
<c>-7840</c><c>-7896</c><c>-7946</c><c>-7994</c>
<c>120</c>
<c>-8034</c><c>-8072</c><c>-8104</c><c>-8130</c>
<c>124</c>
<c>-8152</c><c>-8170</c><c>-8182</c><c>-8190</c>
<c>128</c>
<c>-8192</c>        <c/>        <c/>        <c/>
</texttable>
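<t>
The following non-normative sketch illustrates the table lookup and linear
 interpolation above; cos_Q13[] is the 129-entry table from
 <xref target="silk_cos_table"/> and n_Q15 is a single normalized LSF
 coefficient:
<figure>
<artwork><![CDATA[
#include <stdint.h>

/* Sketch of the piecewise linear cosine approximation. */
static int32_t lsf_cos_sketch(const int16_t cos_Q13[129], int n_Q15)
{
    int i = n_Q15 >> 8;     /* top 7 bits: table index */
    int f = n_Q15 & 255;    /* next 8 bits: fractional part */
    return (cos_Q13[i]*256 + (cos_Q13[i+1] - cos_Q13[i])*f + 8) >> 4;
}
]]></artwork>
</figure>
</t>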

<t>
Given the list of cosine values, silk_NLSF2A_find_poly() (silk_NLSF2A.c)
 computes the coefficients of P and Q, described here via a simple recurrence.
Let p_Q16[k][j] and q_Q16[k][j] be the coefficients of the products of the
 first (k+1) root pairs for P and Q, with j indexing the coefficient number.
Only the first (k+2) coefficients are needed, as the products are symmetric.
Let p_Q16[0][0]&nbsp;=&nbsp;q_Q16[0][0]&nbsp;=&nbsp;1&lt;&lt;16,
 p_Q16[0][1]&nbsp;=&nbsp;-c_Q17[0], q_Q16[0][1]&nbsp;=&nbsp;-c_Q17[1], and
 d2&nbsp;=&nbsp;d_LPC/2.
As boundary conditions, assume
 p_Q16[k][j]&nbsp;=&nbsp;q_Q16[k][j]&nbsp;=&nbsp;0 for all
 j&nbsp;&lt;&nbsp;0.
Also, assume p_Q16[k][k+2]&nbsp;=&nbsp;p_Q16[k][k] and
 q_Q16[k][k+2]&nbsp;=&nbsp;q_Q16[k][k] (because of the symmetry).
Then, for 0&nbsp;&lt;&nbsp;k&nbsp;&lt;&nbsp;d2 and 0&nbsp;&lt;=&nbsp;j&nbsp;&lt;=&nbsp;k+1,
<figure align="center">
<artwork align="center"><![CDATA[
p_Q16[k][j] = p_Q16[k-1][j] + p_Q16[k-1][j-2]
              - ((c_Q17[2*k]*p_Q16[k-1][j-1] + 32768)>>16) ,

q_Q16[k][j] = q_Q16[k-1][j] + q_Q16[k-1][j-2]
              - ((c_Q17[2*k+1]*q_Q16[k-1][j-1] + 32768)>>16) .
]]></artwork>
</figure>
The use of Q17 values for the cosine terms in an otherwise Q16 expression
 implicitly scales them by a factor of 2.
The multiplications in this recurrence may require up to 48 bits of precision
 in the result to avoid overflow.
In practice, each row of the recurrence only depends on the previous row, so an
 implementation does not need to store all of them.
</t>
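<t>
As a non-normative illustration of this recurrence, the sketch below computes
 the last row for one of the two polynomials using a pair of scratch rows and
 64-bit intermediates; c2_Q17[] holds every other entry of c_Q17[] (the even
 entries for P, the odd entries for Q), and d2 is at most 8 since d_LPC is at
 most 16.
This is a restatement of the math above, not the reference
 silk_NLSF2A_find_poly() code:
<figure>
<artwork><![CDATA[
#include <stdint.h>
#include <string.h>

static void find_poly_sketch(int32_t *row_Q16 /* [d2+1] */,
                             const int32_t *c2_Q17, int d2)
{
    int32_t prev[10], row[10];
    int k, j;
    memset(row, 0, sizeof(row));
    row[0] = 1 << 16;         /* p_Q16[0][0] */
    row[1] = -c2_Q17[0];      /* p_Q16[0][1] */
    row[2] = row[0];          /* symmetric extension p_Q16[0][2] */
    for (k = 1; k < d2; k++) {
        memcpy(prev, row, sizeof(prev));
        for (j = 0; j <= k + 1; j++) {
            /* 48-bit product of a Q17 cosine and a Q16 coefficient */
            int64_t t = (int64_t)c2_Q17[k]*(j >= 1 ? prev[j-1] : 0);
            row[j] = prev[j] + (j >= 2 ? prev[j-2] : 0)
                     - (int32_t)((t + 32768) >> 16);
        }
        row[k+2] = row[k];    /* maintain the symmetric extension */
    }
    for (j = 0; j <= d2; j++)
        row_Q16[j] = row[j];
}
]]></artwork>
</figure>
</t>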
<t>
silk_NLSF2A() uses the values from the last row of this recurrence to
 reconstruct a 32-bit version of the LPC filter (without the leading 1.0
 coefficient), a32_Q17[k], 0&nbsp;&lt;=&nbsp;k&nbsp;&lt;&nbsp;d2:
<figure align="center">
<artwork align="center"><![CDATA[
a32_Q17[k]         = -(q_Q16[d2-1][k+1] - q_Q16[d2-1][k])
                     - (p_Q16[d2-1][k+1] + p_Q16[d2-1][k]) ,

a32_Q17[d_LPC-k-1] =  (q_Q16[d2-1][k+1] - q_Q16[d2-1][k])
                     - (p_Q16[d2-1][k+1] + p_Q16[d2-1][k]) .
]]></artwork>
</figure>
The sum and difference of two terms from each of the p_Q16 and q_Q16
 coefficient lists reflect the (1&nbsp;+&nbsp;z**-1) and
 (1&nbsp;-&nbsp;z**-1) factors of P and Q, respectively.
The promotion of the expression from Q16 to Q17 implicitly scales the result
 by 1/2.
</t>
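<t>
Continuing the non-normative sketch, the reconstruction above can be written
 as follows, where p_Q16[] and q_Q16[] are the last rows produced for P and Q:
<figure>
<artwork><![CDATA[
#include <stdint.h>

static void reconstruct_a32_sketch(int32_t *a32_Q17,
                                   const int32_t *p_Q16,
                                   const int32_t *q_Q16, int d2)
{
    int k;
    int d_LPC = 2*d2;
    for (k = 0; k < d2; k++) {
        a32_Q17[k]             = -(q_Q16[k+1] - q_Q16[k])
                                 - (p_Q16[k+1] + p_Q16[k]);
        a32_Q17[d_LPC - k - 1] =  (q_Q16[k+1] - q_Q16[k])
                                 - (p_Q16[k+1] + p_Q16[k]);
    }
}
]]></artwork>
</figure>
</t>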
</section>

<section anchor="silk_lpc_range"
 title="Limiting the Range of the LPC Coefficients">
<t>
The a32_Q17[] coefficients are too large to fit in a 16-bit value, which
 significantly increases the cost of applying this filter in fixed-point
 decoders.
Reducing them to Q12 precision doesn't incur any significant quality loss,
 but still does not guarantee they will fit.
silk_NLSF2A() applies up to 10 rounds of bandwidth expansion to limit
 the dynamic range of these coefficients.
Even floating-point decoders SHOULD perform these steps, to avoid mismatch.
</t>
<t>
For each round, the process first finds the index k such that abs(a32_Q17[k])
 is the largest, breaking ties by using the lower value of k.
Then, it computes the corresponding Q12 precision value, maxabs_Q12 (where
 maxabs_Q17 is abs(a32_Q17[k]) for that index), subject to
 an upper bound to avoid overflow in subsequent computations:
<figure align="center">
<artwork align="center"><![CDATA[
maxabs_Q12 = min((maxabs_Q17 + 16) >> 5, 163838) .
]]></artwork>
</figure>
If this is larger than 32767, the procedure derives the chirp factor,
 sc_Q16[0], to use in the bandwidth expansion as
<figure align="center">
<artwork align="center"><![CDATA[
                    (maxabs_Q12 - 32767) << 14
sc_Q16[0] = 65470 - -------------------------- ,
                    (maxabs_Q12 * (k+1)) >> 2
]]></artwork>
</figure>
 where the division here is exact integer division.
This is an approximation of the chirp factor needed to reduce the target
 coefficient to 32767, though it is both less than 0.999 and, for
 k&nbsp;&gt;&nbsp;0 when maxabs_Q12 is much greater than 32767, still slightly
 too large.
</t>
<t>
silk_bwexpander_32() (silk_bwexpander_32.c) performs the bandwidth expansion
 (again, only when maxabs_Q12 is greater than 32767) using the following
 recurrence:
<figure align="center">
<artwork align="center"><![CDATA[
 a32_Q17[k] = (a32_Q17[k]*sc_Q16[k]) >> 16

sc_Q16[k+1] = (sc_Q16[0]*sc_Q16[k] + 32768) >> 16
]]></artwork>
</figure>
The first multiply may require up to 48 bits of precision in the result to
 avoid overflow.
The second multiply must be unsigned to avoid overflow with only 32 bits of
 precision.
The reference implementation uses a slightly more complex formulation that
 avoids the 32-bit overflow using signed multiplication, but is otherwise
 equivalent.
</t>
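<t>
The following non-normative sketch shows one round of this bandwidth
 expansion; chirp_Q16 is the sc_Q16[0] value derived above, and an arithmetic
 right shift of negative values is assumed:
<figure>
<artwork><![CDATA[
#include <stdint.h>

static void bwexpander32_sketch(int32_t *a32_Q17, int d_LPC,
                                int32_t chirp_Q16)
{
    int k;
    int32_t sc_Q16 = chirp_Q16;
    for (k = 0; k < d_LPC; k++) {
        /* 48-bit product, truncated back to Q17 */
        a32_Q17[k] = (int32_t)(((int64_t)a32_Q17[k]*sc_Q16) >> 16);
        /* unsigned multiply so the rounded product fits in 32 bits */
        sc_Q16 = (int32_t)(((uint32_t)chirp_Q16*(uint32_t)sc_Q16
                            + 32768) >> 16);
    }
}
]]></artwork>
</figure>
</t>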
<t>
After 10 rounds of bandwidth expansion are performed, the coefficients are
 simply saturated to 16 bits:
<figure align="center">
<artwork align="center"><![CDATA[
a32_Q17[k] = clamp(-32768, (a32_Q17[k]+16) >> 5, 32767) << 5 .
]]></artwork>
</figure>
Because this performs the actual saturation in the Q12 domain, but converts the
 coefficients back to the Q17 domain for the purposes of prediction gain
 limiting, this step must be performed after the 10th round of bandwidth
 expansion, regardless of whether or not the Q12 version of any of the
 coefficients still overflow a 16-bit integer.
This saturation is not performed if maxabs_Q12 drops to 32767 or less prior to
 the 10th round.
</t>
</section>

<section title="Limiting the Prediction Gain of the LPC Filter">
<t>
Even if the Q12 coefficients would fit, the resulting filter may still have a
 significant prediction gain (especially for voiced sounds), making the filter
 unstable.
silk_NLSF2A() applies up to 18 additional rounds of bandwidth expansion to
 limit the prediction gain.
Instead of controlling the amount of bandwidth expansion using the prediction
 gain itself (which may diverge to infinity for an unstable filter),
 silk_NLSF2A() uses LPC_inverse_pred_gain_QA() (silk_LPC_inv_pred_gain.c)
 to compute the reflection coefficients associated with the filter.
The filter is stable if and only if the magnitude of these coefficients is
 sufficiently less than one.
The reflection coefficients, rc[k], can be computed using a simple Levinson
 recurrence, initialized with the LPC coefficients
 a[d_LPC-1][n]&nbsp;=&nbsp;a[n], and then updated via
<figure align="center">
<artwork align="center"><![CDATA[
    rc[k] = -a[k][k] ,

            a[k][n] - a[k][k-n-1]*rc[k]
a[k-1][n] = --------------------------- .
                             2
                    1 - rc[k]
]]></artwork>
</figure>
</t>
<t>
However, LPC_inverse_pred_gain_QA() approximates this using fixed-point
 arithmetic to guarantee reproducible results across platforms and
 implementations.
It is important to run on the real Q12 coefficients that will be used during
 reconstruction, because small changes in the coefficients can make a stable
 filter unstable, but increasing the precision back to Q16 allows more accurate
 computation of the reflection coefficients.
Thus, let
<figure align="center">
<artwork align="center"><![CDATA[
a32_Q16[d_LPC-1][n] = ((a32_Q17[n] + 16) >> 5) << 4
]]></artwork>
</figure>
 be the Q16 representation of the Q12 version of the LPC coefficients that will
 eventually be used.
Then for each k from d_LPC-1 down to 0, if
 abs(a32_Q16[k][k])&nbsp;&gt;&nbsp;65520, the filter is unstable and the
 recurrence stops.
Otherwise, the row k-1 of a32_Q16 is computed from row k as
<figure align="center">
<artwork align="center"><![CDATA[
      rc_Q31[k] = -a32_Q16[k][k] << 15 ,

     div_Q30[k] = (1<<30) - 1 - (rc_Q31[k]*rc_Q31[k] >> 32) ,

          b1[k] = ilog(div_Q30[k]) - 16 ,

                        (1<<29) - 1
     inv_Qb1[k] = ----------------------- ,
                  div_Q30[k] >> (b1[k]+1)

     err_Q29[k] = (1<<29)
                  - ((div_Q30[k]<<(15-b1[k]))*inv_Qb1[k] >> 16) ,

     mul_Q16[k] = ((inv_Qb1[k] << 16)
                   + (err_Q29[k]*inv_Qb1[k] >> 13)) >> b1[k] ,

          b2[k] = ilog(mul_Q16[k]) - 15 ,

  t_Q16[k-1][n] = a32_Q16[k][n]
                  - ((a32_Q16[k][k-n-1]*rc_Q31[k] >> 32) << 1) ,

a32_Q16[k-1][n] = ((t_Q16[k-1][n] *
                    (mul_Q16[k] << (16-b2[k]))) >> 32) << b2[k] .
]]></artwork>
</figure>
Here, rc_Q31[k] are the reflection coefficients.
div_Q30[k] is the denominator for each iteration, and mul_Q16[k] is its
 multiplicative inverse.
inv_Qb1[k], which ranges from 16384 to 32767, is a low-precision version of
 that inverse (with b1[k] fractional bits, where b1[k] ranges from 3 to 14).
err_Q29[k] is the residual error, ranging from -32392 to 32763, which is used
 to improve the accuracy.
t_Q16[k-1][n], 0&nbsp;&lt;=&nbsp;n&nbsp;&lt;&nbsp;k, are the numerators for the
 next row of coefficients in the recursion, and a32_Q16[k-1][n] is the final
 version of that row.
Every multiply in this procedure except the one used to compute mul_Q16[k]
 requires more than 32 bits of precision, but otherwise all intermediate
 results fit in 32 bits or less.
In practice, because each row only depends on the next one, an implementation
 does not need to store them all.
If abs(a32_Q16[k][k])&nbsp;&lt;=&nbsp;65520 for
 0&nbsp;&lt;=&nbsp;k&nbsp;&lt;&nbsp;d_LPC, then the filter is considered stable.
</t>
<t>
On round i, 1&nbsp;&lt;=&nbsp;i&nbsp;&lt;=&nbsp;18, if the filter passes this
 stability check, then this procedure stops, and the final LPC coefficients to
 use for reconstruction<!--TODO: In section...--> are
<figure align="center">
<artwork align="center"><![CDATA[
a_Q12[k] = (a32_Q17[k] + 16) >> 5 .
]]></artwork>
</figure>
Otherwise, a round of bandwidth expansion is applied using the same procedure
 as in <xref target="silk_lpc_range"/>, with
<figure align="center">
<artwork align="center"><![CDATA[
sc_Q16[0] = 65536 - i*(i+9) .
]]></artwork>
</figure>
If, after the 18th round, the filter still fails the stability check, then
 a_Q12[k] is set to 0 for all k.
</t>
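<t>
As a non-normative illustration, the overall gain limiting loop can be
 sketched as follows, where lpc_is_stable_q12() is a hypothetical predicate
 implementing the fixed-point stability check of this section and
 bwexpander32_sketch() is the expansion routine sketched in
 <xref target="silk_lpc_range"/>:
<figure>
<artwork><![CDATA[
#include <stdint.h>

/* hypothetical predicate: nonzero iff the Q12 version of the filter
   passes the stability check described above */
static int lpc_is_stable_q12(const int32_t *a32_Q17, int d_LPC);

static void limit_gain_sketch(int16_t *a_Q12, int32_t *a32_Q17,
                              int d_LPC)
{
    int i, k;
    for (i = 1; i <= 18; i++) {
        if (lpc_is_stable_q12(a32_Q17, d_LPC))
            break;                           /* round i passes: stop */
        bwexpander32_sketch(a32_Q17, d_LPC, 65536 - i*(i + 9));
    }
    if (lpc_is_stable_q12(a32_Q17, d_LPC)) {
        for (k = 0; k < d_LPC; k++)
            a_Q12[k] = (int16_t)((a32_Q17[k] + 16) >> 5);
    } else {
        for (k = 0; k < d_LPC; k++)
            a_Q12[k] = 0;        /* still unstable after 18 rounds */
    }
}
]]></artwork>
</figure>
</t>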
</section>

</section>

<section title="Long-Term Prediction (LTP) Parameters">
<t>
After the normalized LSF indices and, for 20&nbsp;ms frames, the LSF
 interpolation index, voiced frames (see <xref target="silk_frame_type"/>)
 include additional Long-Term Prediction (LTP) parameters.
There is one primary lag index for each SILK frame, but this is refined to
 produce a separate lag index per subframe using a vector quantizer.
Each subframe also gets its own prediction gain coefficient.
</t>

<section title="Pitch Lags">
<t>
The primary lag index is coded either relative to the primary lag of the prior
 frame or as an absolute index.
Like the quantization gains, the first LBRR frame, an LBRR frame where the
 previous LBRR frame was not coded, or the first regular SILK frame in an Opus
 frame all code the pitch lag as an absolute index.
When the prior frame was not voiced, this also forces absolute coding.
</t>
<t>
With absolute coding, the primary pitch lag may range from 2&nbsp;ms
 (inclusive) up to 18&nbsp;ms (exclusive), corresponding to pitches from
 500&nbsp;Hz down to 55.6&nbsp;Hz, respectively.
It is composed of a high part and a low part, where the decoder reads the high
 part using the 32-entry codebook in <xref target="silk_abs_pitch_high_pdf"/>
 and the low part using the codebook corresponding to the current audio
 bandwidth from <xref target="silk_abs_pitch_low_pdf"/>.
The final primary pitch lag is then
<figure align="center">
<artwork align="center"><![CDATA[
lag = lag_high*lag_scale + lag_low + lag_min
]]></artwork>
</figure>
 where lag_high is the high part, lag_low is the low part, and lag_scale
 and lag_min are the values from the "Scale" and "Minimum Lag" columns of
 <xref target="silk_abs_pitch_low_pdf"/>, respectively.
</t>

<texttable anchor="silk_abs_pitch_high_pdf"
 title="PDF for High Part of Primary Pitch Lag">
<ttcol align="left">PDF</ttcol>
<c>{3,   3,   6,  11,  21,  30,  32,  19,
   11,  10,  12,  13,  13,  12,  11,   9,
    8,   7,   6,   4,   2,   2,   2,   1,
    1,   1,   1,   1,   1,   1,   1,   1}/256</c>
</texttable>

<texttable anchor="silk_abs_pitch_low_pdf"
 title="PDF for Low Part of Primary Pitch Lag">
<ttcol>Audio Bandwidth</ttcol>
<ttcol>PDF</ttcol>
<ttcol>Scale</ttcol>
<ttcol>Minimum Lag</ttcol>
<ttcol>Maximum Lag</ttcol>
<c>NB</c> <c>{64, 64, 64, 64}/256</c>                 <c>4</c> <c>16</c> <c>144</c>
<c>MB</c> <c>{43, 42, 43, 43, 42, 43}/256</c>         <c>6</c> <c>24</c> <c>216</c>
<c>WB</c> <c>{32, 32, 32, 32, 32, 32, 32, 32}/256</c> <c>8</c> <c>32</c> <c>288</c>
</texttable>

<t>
All frames that do not use absolute coding for the primary lag index use
 relative coding instead.
The decoder reads a single delta value using the 21-entry PDF in
 <xref target="silk_rel_pitch_pdf"/>.
If the resulting value is zero, it falls back to the absolute coding procedure
 from the prior paragraph.
Otherwise, the final primary pitch lag is then
<figure align="center">
<artwork align="center"><![CDATA[
lag = lag_prev + (delta_lag_index - 9)
]]></artwork>
</figure>
 where lag_prev is the primary pitch lag from the previous frame and
 delta_lag_index is the value just decoded.
This allows a per-frame change in the pitch lag of -8 to +11 samples.
The decoder does no clamping at this point, so this value can fall outside the
 range of 2&nbsp;ms to 18&nbsp;ms, and the decoder must use this unclamped
 value when using relative coding in the next SILK frame (if any).
However, because an Opus frame can use relative coding for at most two
 consecutive SILK frames, integer overflow should not be an issue.
</t>

<texttable anchor="silk_rel_pitch_pdf"
 title="PDF for Pitch Lag Change">
<ttcol align="left">PDF</ttcol>
<c>{46,  2,  2,  3,  4,  6, 10, 15,
    26, 38, 30, 22, 15, 10,  7,  6,
     4,  4,  2,  2,  2}/256</c>
</texttable>

<t>
After the primary pitch lag, a "pitch contour", stored as a single entry from
 one of four small VQ codebooks, gives lag offsets for each subframe in the
 current SILK frame.
The codebook index is decoded using one of the PDFs in
 <xref target="silk_pitch_contour_pdfs"/> depending on the current frame size
 and audio bandwidth.
<xref target="silk_pitch_contour_cb_nb10ms"/> through
 <xref target="silk_pitch_contour_cb_mbwb20ms"/> give the corresponding offsets
 to apply to the primary pitch lag for each subframe given the decoded codebook
 index.
</t>

<texttable anchor="silk_pitch_contour_pdfs"
 title="PDFs for Subframe Pitch Contour">
<ttcol>Audio Bandwidth</ttcol>
<ttcol>SILK Frame Size</ttcol>
<ttcol>PDF</ttcol>
<c>NB</c>       <c>10&nbsp;ms</c>
<c>{143, 50, 63}/256</c>
<c>NB</c>       <c>20&nbsp;ms</c>
<c>{68, 12, 21, 17, 19, 22, 30, 24,
    17, 16, 10}/256</c>
<c>MB or WB</c> <c>10&nbsp;ms</c>
<c>{91, 46, 39, 19, 14, 12,  8,  7,
     6,  5,  5,  4}/256</c>
<c>MB or WB</c> <c>20&nbsp;ms</c>
<c>{33, 22, 18, 16, 15, 14, 14, 13,
    13, 10,  9,  9,  8,  6,  6,  6,
     5,  4,  4,  4,  3,  3,  3,  2,
     2,  2,  2,  2,  2,  2,  1,  1,
     1,  1}/256</c>
</texttable>

<texttable anchor="silk_pitch_contour_cb_nb10ms"
 title="Codebook Vectors for Subframe Pitch Contour: NB, 10&nbsp;ms Frames">
<ttcol>Index</ttcol>
<ttcol align="right">Subframe Offsets</ttcol>
<c>0</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0</spanx></c>
<c>1</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0</spanx></c>
<c>2</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;1</spanx></c>
</texttable>

<texttable anchor="silk_pitch_contour_cb_nb20ms"
 title="Codebook Vectors for Subframe Pitch Contour: NB, 20&nbsp;ms Frames">
<ttcol>Index</ttcol>
<ttcol align="right">Subframe Offsets</ttcol>
 <c>0</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>1</c> <c><spanx style="vbare">&nbsp;2,&nbsp;&nbsp;1,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
 <c>2</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;1,&nbsp;&nbsp;2</spanx></c>
 <c>3</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1</spanx></c>
 <c>4</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>5</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1</spanx></c>
 <c>6</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1,&nbsp;&nbsp;1</spanx></c>
 <c>7</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>8</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>9</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
<c>10</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
</texttable>

<texttable anchor="silk_pitch_contour_cb_mbwb10ms"
 title="Codebook Vectors for Subframe Pitch Contour: MB or WB, 10&nbsp;ms Frames">
<ttcol>Index</ttcol>
<ttcol align="right">Subframe Offsets</ttcol>
 <c>0</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>1</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;1</spanx></c>
 <c>2</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0</spanx></c>
 <c>3</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;1</spanx></c>
 <c>4</c> <c><spanx style="vbare">&nbsp;1,&nbsp;-1</spanx></c>
 <c>5</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;2</spanx></c>
 <c>6</c> <c><spanx style="vbare">&nbsp;2,&nbsp;-1</spanx></c>
 <c>7</c> <c><spanx style="vbare">-2,&nbsp;&nbsp;2</spanx></c>
 <c>8</c> <c><spanx style="vbare">&nbsp;2,&nbsp;-2</spanx></c>
 <c>9</c> <c><spanx style="vbare">-2,&nbsp;&nbsp;3</spanx></c>
<c>10</c> <c><spanx style="vbare">&nbsp;3,&nbsp;-2</spanx></c>
<c>11</c> <c><spanx style="vbare">-3,&nbsp;&nbsp;3</spanx></c>
</texttable>

<texttable anchor="silk_pitch_contour_cb_mbwb20ms"
 title="Codebook Vectors for Subframe Pitch Contour: MB or WB, 20&nbsp;ms Frames">
<ttcol>Index</ttcol>
<ttcol align="right">Subframe Offsets</ttcol>
 <c>0</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>1</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1,&nbsp;&nbsp;1</spanx></c>
 <c>2</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>3</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>4</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1</spanx></c>
 <c>5</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0</spanx></c>
 <c>6</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;1</spanx></c>
 <c>7</c> <c><spanx style="vbare">&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
 <c>8</c> <c><spanx style="vbare">-1,&nbsp;&nbsp;0,&nbsp;&nbsp;1,&nbsp;&nbsp;2</spanx></c>
 <c>9</c> <c><spanx style="vbare">&nbsp;1,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
<c>10</c> <c><spanx style="vbare">-2,&nbsp;-1,&nbsp;&nbsp;1,&nbsp;&nbsp;2</spanx></c>
<c>11</c> <c><spanx style="vbare">&nbsp;2,&nbsp;&nbsp;1,&nbsp;&nbsp;0,&nbsp;-1</spanx></c>
<c>12</c> <c><spanx style="vbare">-2,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;&nbsp;2</spanx></c>
<c>13</c> <c><spanx style="vbare">-2,&nbsp;&nbsp;0,&nbsp;&nbsp;1,&nbsp;&nbsp;3</spanx></c>
<c>14</c> <c><spanx style="vbare">&nbsp;2,&nbsp;&nbsp;1,&nbsp;-1,&nbsp;-2</spanx></c>
<c>15</c> <c><spanx style="vbare">-3,&nbsp;-1,&nbsp;&nbsp;1,&nbsp;&nbsp;3</spanx></c>
<c>16</c> <c><spanx style="vbare">&nbsp;2,&nbsp;&nbsp;0,&nbsp;&nbsp;0,&nbsp;-2</spanx></c>
<c>17</c> <c><spanx style="vbare">&nbsp;3,&nbsp;&nbsp;1,&nbsp;&nbsp;0,&nbsp;-2</spanx></c>
<c>18</c> <c><spanx style="vbare">-3,&nbsp;-1,&nbsp;&nbsp;2,&nbsp;&nbsp;4</spanx></c>
<c>19</c> <c><spanx style="vbare">-4,&nbsp;-1,&nbsp;&nbsp;1,&nbsp;&nbsp;4</spanx></c>
<c>20</c> <c><spanx style="vbare">&nbsp;3,&nbsp;&nbsp;1,&nbsp;-1,&nbsp;-3</spanx></c>
<c>21</c> <c><spanx style="vbare">-4,&nbsp;-1,&nbsp;&nbsp;2,&nbsp;&nbsp;5</spanx></c>
<c>22</c> <c><spanx style="vbare">&nbsp;4,&nbsp;&nbsp;2,&nbsp;-1,&nbsp;-3</spanx></c>
<c>23</c> <c><spanx style="vbare">&nbsp;4,&nbsp;&nbsp;1,&nbsp;-1,&nbsp;-4</spanx></c>
<c>24</c> <c><spanx style="vbare">-5,&nbsp;-1,&nbsp;&nbsp;2,&nbsp;&nbsp;6</spanx></c>
<c>25</c> <c><spanx style="vbare">&nbsp;5,&nbsp;&nbsp;2,&nbsp;-1,&nbsp;-4</spanx></c>
<c>26</c> <c><spanx style="vbare">-6,&nbsp;-2,&nbsp;&nbsp;2,&nbsp;&nbsp;6</spanx></c>
<c>27</c> <c><spanx style="vbare">-5,&nbsp;-2,&nbsp;&nbsp;2,&nbsp;&nbsp;5</spanx></c>
<c>28</c> <c><spanx style="vbare">&nbsp;6,&nbsp;&nbsp;2,&nbsp;-1,&nbsp;-5</spanx></c>
<c>29</c> <c><spanx style="vbare">-7,&nbsp;-2,&nbsp;&nbsp;3,&nbsp;&nbsp;8</spanx></c>
<c>30</c> <c><spanx style="vbare">&nbsp;6,&nbsp;&nbsp;2,&nbsp;-2,&nbsp;-6</spanx></c>
<c>31</c> <c><spanx style="vbare">&nbsp;5,&nbsp;&nbsp;2,&nbsp;-2,&nbsp;-5</spanx></c>
<c>32</c> <c><spanx style="vbare">&nbsp;8,&nbsp;&nbsp;3,&nbsp;-2,&nbsp;-7</spanx></c>
<c>33</c> <c><spanx style="vbare">-9,&nbsp;-3,&nbsp;&nbsp;3,&nbsp;&nbsp;9</spanx></c>
</texttable>

<t>
The final pitch lag for each subframe is assembled in silk_decode_pitch()
 (silk_decode_pitch.c).
Let lag be the primary pitch lag for the current SILK frame, contour_index be
 the index of the VQ codebook, and lag_cb[contour_index][k] be the corresponding
 entry of the codebook from the appropriate table given above for the
 <spanx style="emph">k</spanx>th subframe.
Then the final pitch lag for that subframe is
<figure align="center">
<artwork align="center"><![CDATA[
pitch_lags[k] = clamp(lag_min, lag + lag_cb[contour_index][k],
                      lag_max)
]]></artwork>
</figure>
 where lag_min and lag_max are the values from the "Minimum Lag" and
 "Maximum Lag" columns of <xref target="silk_abs_pitch_low_pdf"/>,
 respectively.
</t>
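<t>
A non-normative sketch of this final step follows; cb_row points at the
 selected codebook vector lag_cb[contour_index][], and n_subframes, lag_min,
 and lag_max are assumed to be known from the frame size and audio bandwidth:
<figure>
<artwork><![CDATA[
#include <stdint.h>

static void decode_pitch_lags_sketch(int *pitch_lags, int lag,
                                     const int8_t *cb_row,
                                     int n_subframes,
                                     int lag_min, int lag_max)
{
    int k;
    for (k = 0; k < n_subframes; k++) {
        int l = lag + cb_row[k];
        pitch_lags[k] = l < lag_min ? lag_min
                      : l > lag_max ? lag_max : l;
    }
}
]]></artwork>
</figure>
</t>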

</section>

</section>

</section>

<section title="LBRR Frames">
<t>
LBRR frames, if present, immediately follow the header bits, prior to any
 regular SILK frames.
Each frame whose LBRR flag was set includes a separate set of data for each
 channel.
</t>
</section>

</section>


<section title="CELT Decoder">

<t>
The CELT layer is decoded based on the following symbols and sets of symbols:
</t>

<texttable anchor='table_example'>
<ttcol align='center'>Symbol(s)</ttcol>
<ttcol align='center'>PDF</ttcol>
<ttcol align='center'>Condition</ttcol>
<c>silence</c>      <c>{32767, 1}/32768</c> <c></c>
<c>post-filter</c>  <c>{1, 1}/2</c> <c></c>
<c>octave</c>       <c>uniform (6)</c><c>post-filter</c>
<c>period</c>       <c>raw bits (4+octave)</c><c>post-filter</c>
<c>gain</c>         <c>raw bits (3)</c><c>post-filter</c>
<c>tapset</c>       <c>{2, 1, 1}/4</c><c>post-filter</c>
<c>transient</c>    <c>{7, 1}/8</c><c></c>
<c>intra</c>        <c>{7, 1}/8</c><c></c>
<c>coarse energy</c><c><xref target="energy-decoding"/></c><c></c>
<c>tf_change</c>    <c><xref target="transient-decoding"/></c><c></c>
<c>tf_select</c>    <c>{1, 1}/2</c><c><xref target="transient-decoding"/></c>
<c>spread</c>       <c>{7, 2, 21, 2}/32</c><c></c>
<c>dyn. alloc.</c>  <c><xref target="allocation"/></c><c></c>
<c>alloc. trim</c>  <c>{2, 2, 5, 10, 22, 46, 22, 10, 5, 2, 2}/128</c><c></c>
<c>skip</c>         <c>{1, 1}/2</c><c><xref target="allocation"/></c>
<c>intensity</c>    <c>uniform</c><c><xref target="allocation"/></c>
<c>dual</c>         <c>{1, 1}/2</c><c></c>
<c>fine energy</c>  <c><xref target="energy-decoding"/></c><c></c>
<c>residual</c>     <c><xref target="PVQ-decoder"/></c><c></c>
<c>anti-collapse</c><c>{1, 1}/2</c><c><xref target="anti-collapse"/></c>
<c>finalize</c>     <c><xref target="energy-decoding"/></c><c></c>
<postamble>Order of the symbols in the CELT section of the bit-stream.</postamble>
</texttable>

<t>
The decoder extracts information from the range-coded bit-stream in the order
described in the table above. In some circumstances, it is
possible for a decoded value to be out of range due to a very small amount of redundancy
in the encoding of large integers by the range coder.
In that case, the decoder should assume there has been an error in the coding,
decoding, or transmission and SHOULD take measures to conceal the error and/or report
to the application that a problem has occurred.
</t>

<section anchor="transient-decoding" title="Transient Decoding">
<t>
The <spanx style="emph">transient</spanx> flag encoded in the bit-stream has a
probability of 1/8. When it is set, then the MDCT coefficients represent multiple
short MDCTs in the frame. When not set, the coefficients represent a single
long MDCT for the frame. In addition to the global transient flag, there is a per-band
binary flag to change the time-frequency (tf) resolution independently in each band. The
change in tf resolution is defined in tf_select_table[][] in celt.c and depends
on the frame size, whether the transient flag is set, and the value of tf_select.
The tf_select flag uses a 1/2 probability, but it is only decoded
if it can have an impact on the result given the values of all per-band
tf_change flags.
</t>
</section>

<section anchor="energy-decoding" title="Energy Envelope Decoding">

<t>
It is important to quantize the energy with sufficient resolution because
any energy quantization error cannot be compensated for at a later
stage. Regardless of the resolution used for encoding the shape of a band,
it is perceptually important to preserve the energy in each band. CELT uses a
three-step coarse-fine-fine strategy for encoding the energy in the base-2 log
domain, as implemented in quant_bands.c.</t>

<section anchor="coarse-energy-decoding" title="Coarse energy decoding">
<t>
Coarse quantization of the energy uses a fixed resolution of 6 dB
(integer part of base-2 log). To minimize the bitrate, prediction is applied
both in time (using the previous frame) and in frequency (using the previous
bands). The part of the prediction that is based on the
previous frame can be disabled, creating an "intra" frame where the energy
is coded without reference to prior frames. The decoder first reads the intra flag
to determine what prediction is used.
The 2-D z-transform of
the prediction filter is A(z_l, z_b) = (1 - alpha*z_l^-1)*(1 - z_b^-1)/(1 - beta*z_b^-1),
where b is the band index and l is the frame index. The prediction coefficients
applied depend on the frame size in use when not using intra energy, and are
alpha=0, beta=4915/32768 when using intra energy.
The time-domain prediction is based on the final fine quantization of the previous
frame, while the frequency-domain (within the current frame) prediction is based
on coarse quantization only (because the fine quantization has not been computed
yet). The prediction is clamped internally so that fixed-point implementations with
limited dynamic range do not suffer desynchronization.
We approximate the ideal
probability distribution of the prediction error using a Laplace distribution
with separate parameters for each frame size in intra and inter-frame modes. The
coarse energy quantization is performed by unquant_coarse_energy() and
unquant_coarse_energy_impl() (quant_bands.c). The decoding of the Laplace-distributed values is
implemented in ec_laplace_decode() (laplace.c).
</t>
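<t>
The following non-normative sketch shows the per-band prediction implied by
 the filter above, applied to a vector q[] of already-decoded
 Laplace-distributed prediction errors; the variable names are hypothetical,
 and the actual decoding, clamping, and fixed-point details are those of
 unquant_coarse_energy():
<figure>
<artwork><![CDATA[
/* Energies are in the base-2 log domain. */
static void coarse_predict_sketch(float *coarseE, const float *oldE,
                                  const float *q, int nbands,
                                  float alpha, float beta)
{
    float prev = 0;
    int band;
    for (band = 0; band < nbands; band++) {
        /* time-domain part (alpha) plus frequency-domain part (prev) */
        coarseE[band] = alpha*oldE[band] + prev + q[band];
        prev += (1 - beta)*q[band];
    }
}
]]></artwork>
</figure>
</t>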

</section>

<section anchor="fine-energy-decoding" title="Fine energy quantization">
<t>
The number of bits assigned to fine energy quantization in each band is determined
by the bit allocation computation described in <xref target="allocation"></xref>.
Let B_i be the number of fine energy bits
for band i; the refinement is an integer f in the range [0,2^B_i-1]. The mapping between f
and the correction applied to the coarse energy is equal to (f+1/2)/2^B_i - 1/2. Fine
energy quantization is implemented in quant_fine_energy() (quant_bands.c).
</t>
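<t>
As a non-normative example, the correction for a single band can be computed
 with the hypothetical helper below, where f is the decoded refinement and B
 the number of fine energy bits allocated to the band:
<figure>
<artwork><![CDATA[
/* Maps f in [0, 2**B - 1] to a correction in (-1/2, 1/2), in the
   base-2 log domain. */
static float fine_energy_correction(int f, int B)
{
    return ((float)f + 0.5f)/(float)(1 << B) - 0.5f;
}
]]></artwork>
</figure>
</t>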
<t>
When some bits are left "unused" after all other flags have been decoded, these bits
are assigned to a "final" step of fine allocation. In effect, these bits are used
to add one extra fine energy bit per band per channel. The allocation process
determines two <spanx style="emph">priorities</spanx> for the final fine bits.
Any remaining bits are first assigned only to bands of priority 0, starting
from band 0 and going up. If all bands of priority 0 have received one bit per
channel, then bands of priority 1 are assigned an extra bit per channel,
starting from band 0. If any bits are left after this, they are left unused.
This is implemented in unquant_energy_finalise() (quant_bands.c).
</t>

</section> <!-- fine energy -->

</section> <!-- Energy decode -->

<section anchor="allocation" title="Bit allocation">
<t>Many codecs transmit significant amounts of side information for
the purpose of controlling bit allocation within a frame. Often this
side information controls bit usage indirectly and must be carefully
selected to achieve the desired rate constraints.</t>

<t>The band-energy normalized structure of Opus MDCT mode ensures that a
constant bit allocation for the shape content of a band will result in a
roughly constant tone to noise ratio, which provides for fairly consistent
perceptual performance. The effectiveness of this approach is the result of
two factors: the band energy, which is understood to be perceptually
important on its own, is always preserved regardless of the shape precision, and
the constant tone-to-noise ratio implies a constant intra-band noise-to-masking ratio.
Intra-band masking is the strongest of the perceptual masking effects. This structure
means that the ideal allocation is more consistent from frame to frame than
it is for other codecs without an equivalent structure.</t>

<t>Because the bit allocation is used to drive the decoding of the range-coder
stream it MUST be recovered exactly so that identical coding decisions are
made in the encoder and decoder. Any deviation from the reference's resulting
bit allocation will result in corrupted output, though implementers are
free to implement the procedure in any way which produces identical results.</t>

<t>Because all of the information required to decode a frame must be derived
from that frame alone in order to retain robustness to packet loss, the
overhead of explicitly signaling the allocation would be considerable,
especially for low-latency (small frame size) applications,
even though the allocation is relatively static.</t>

<t>For this reason, in the MDCT mode Opus uses a primarily implicit bit
allocation. The available bit-stream capacity is known in advance to both
the encoder and decoder without additional signaling, ultimately from the
packet sizes expressed by a higher level protocol. Using this information
the codec interpolates an allocation from a hard-coded table.</t>

<t>While the band-energy structure effectively models intra-band masking,
it ignores the weaker inter-band masking, band-temporal masking, and
other less significant perceptual effects. While these effects can
often be ignored they can become significant for particular samples. One
mechanism available to encoders would be to simply increase the overall
rate for these frames, but this is not possible in a constant rate mode
and can be fairly inefficient. As a result three explicitly signaled
mechanisms are provided to alter the implicit allocation:</t>

<t>
<list style="symbols">
<t>Band boost</t>
<t>Allocation trim</t>
<t>Band skipping</t>
</list>
</t>

<t>The first of these mechanisms, band boost, allows an encoder to boost
the allocation in specific bands. The second, allocation trim, works by
biasing the overall allocation towards higher or lower frequency bands. The third, band
skipping, selects which low-precision high frequency bands
will be allocated no shape bits at all.</t>

<t>In stereo mode there are also two additional parameters
potentially coded as part of the allocation procedure: a parameter to allow the
selective elimination of allocation for the 'side' in jointly coded bands,
and a flag to deactivate joint coding. These values are not signaled if
they would be meaningless in the overall context of the allocation.</t>

<t>Because every signaled adjustment increases overhead and implementation
complexity, none were included speculatively: the reference encoder makes use
of all of these mechanisms. While the decision logic in the reference was
found to be effective enough to justify the overhead and complexity, further
analysis techniques may be discovered which increase the effectiveness of these
parameters. As with other signaled parameters, the encoder is free to choose the
values in any manner, but unless a technique is known to deliver superior
perceptual results the methods used by the reference implementation should be
used.</t>

<t>The process of allocation consists of the following steps: determining the per-band
maximum allocation vector, decoding the boosts, decoding the tilt, determining
the remaining capacity of the frame, searching the mode table for the
entry nearest but not exceeding the available space (subject to the tilt, boosts, band
maximums, and band minimums), linear interpolation, reallocation of
unused bits with concurrent skip decoding, determination of the
fine-energy vs. shape split, and final reallocation. This process results
in a shape allocation per band (in 1/8th bit units), a per-band fine-energy
allocation (in 1 bit per channel units), a set of band priorities for
controlling the use of remaining bits at the end of the frame, and a
remaining balance of unallocated space which is usually zero except
at very high rates.</t>

<t>The maximum allocation vector is an approximation of the maximum space
which can be used by each band for a given mode. The value is
approximate because the shape encoding is variable rate (due
to entropy coding of splitting parameters). Setting the maximum too low reduces the
maximum achievable quality in a band while setting it too high
may result in waste: bit-stream capacity available at the end
of the frame which can not be put to any use. The maximums
specified by the codec reflect the average maximum. In the reference
the maximums are provided in partially computed form, in order to fit in less
memory, as a static table (XXX cache.caps). Implementations are expected
to simply use the same table data but the procedure for generating
this table is included in rate.c as part of compute_pulse_cache().</t>

<t>To convert the values in cache.caps into the actual maximums: First
set nbBands to the maximum number of bands for this mode and stereo to
zero if stereo is not in use and one otherwise. For each band assign N
to the number of MDCT bins covered by the band (for one channel), set LM
to the shift value for the frame size (e.g., 0 for 120, 1 for 240, 2 for 480, and 3 for 960)
then set i to nbBands*(2*LM+stereo). Then set the maximum for the band to
the i-th index of cache.caps + 64 and multiply by the number of channels
in the current frame (one or two) and by N then divide the result by 4
using truncating integer division. The resulting vector will be called
cap[]. The elements fit in signed 16 bit integers but do not fit in 8 bits.
This procedure is implemented in the reference in the function init_caps() in celt.c.
</t>
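<t>
A non-normative sketch of this conversion follows; cache_caps[], eBands[],
 nbEBands, LM, and the channel count C are hypothetical names for values that
 come from the mode data and the current frame parameters, with stereo equal
 to C-1:
<figure>
<artwork><![CDATA[
static void init_caps_sketch(int *cap, const int *cache_caps,
                             const short *eBands, int nbEBands,
                             int LM, int C)
{
    int i;
    int stereo = C - 1;
    for (i = 0; i < nbEBands; i++) {
        /* MDCT bins covered by the band, for one channel */
        int N = (eBands[i+1] - eBands[i]) << LM;
        cap[i] = (cache_caps[nbEBands*(2*LM + stereo) + i] + 64)*C*N >> 2;
    }
}
]]></artwork>
</figure>
</t>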

<t>The band boosts are represented by a series of binary symbols which
are coded with very low probability. Each band can potentially be boosted
multiple times, subject to the frame actually having enough room to obey
the boost and having enough room to code the boost symbol. The default
coding cost for a boost starts out at six bits, but subsequent boosts
in a band cost only a single bit and every time a band is boosted the
initial cost is reduced (down to a minimum of two). Since the initial
cost of coding a boost is 6 bits the coding cost of the boost symbols when
completely unused is 0.48 bits/frame for a 21 band mode (21*-log2(1-1/2^6)).</t>

<t>To decode the band boosts: First set 'dynalloc_logp' to 6, the initial
amount of storage required to signal a boost in bits, 'total_bits' to the
size of the frame in 8th bits, 'total_boost' to zero, and 'tell' to the total number
of 8th bits decoded
so far. For each band from the coding start (0 normally, but 17 in hybrid mode)
to the coding end (which changes depending on the signaled bandwidth): Set 'width'
to the number of MDCT bins in this band for all channels. Take the larger of width
and 64, then the minimum of that value and the width times eight, and set 'quanta'
to the result. This represents a boost step size of six bits, subject to limits
of 1 bit/sample and 1/8th bit/sample. Set 'boost' to zero and 'dynalloc_loop_logp'
to dynalloc_logp. While dynalloc_loop_logp (the current worst case symbol cost) in
8th bits plus tell is less than total_bits plus total_boost, and boost is less than cap[] for this
band: Decode a bit from the bitstream with dynalloc_loop_logp as the cost
of a one, and update tell to reflect the current used capacity. If the decoded value
is zero, break the loop; otherwise add quanta to boost and total_boost, subtract quanta from
total_bits, and set dynalloc_loop_logp to 1. When the while loop finishes,
boost contains the boost for this band. If boost is non-zero and dynalloc_logp
is greater than 2, decrease dynalloc_logp. Once this process has been
executed on all bands, the band boosts have been decoded. This procedure
is implemented around line 2352 of celt.c.</t>
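<t>
The following non-normative sketch restates the boost decoding loop using the
 variable names above; decode_bit_logp() and tell_frac() are hypothetical
 stand-ins for the range decoder call that decodes a single bit whose "one"
 symbol has probability 2**-logp and for the count of 8th bits consumed so
 far:
<figure>
<artwork><![CDATA[
static int decode_bit_logp(void *dec, int logp);   /* hypothetical */
static int tell_frac(void *dec);                   /* hypothetical */

static void decode_boosts_sketch(void *dec, int *offsets, const int *cap,
                                 const int *width, int start, int end,
                                 int total_bits, int tell)
{
    int band;
    int total_boost = 0;
    int dynalloc_logp = 6;
    for (band = start; band < end; band++) {
        int maxw   = width[band] > 64 ? width[band] : 64;
        int quanta = (width[band] << 3) < maxw ? (width[band] << 3) : maxw;
        int boost  = 0;
        int dynalloc_loop_logp = dynalloc_logp;
        while (tell + (dynalloc_loop_logp << 3) < total_bits + total_boost
               && boost < cap[band]) {
            int flag = decode_bit_logp(dec, dynalloc_loop_logp);
            tell = tell_frac(dec);
            if (!flag)
                break;
            boost       += quanta;
            total_boost += quanta;
            total_bits  -= quanta;
            dynalloc_loop_logp = 1;  /* later boosts in this band cost 1 bit */
        }
        offsets[band] = boost;
        if (boost > 0 && dynalloc_logp > 2)
            dynalloc_logp--;         /* cheapen the first boost next time */
    }
}
]]></artwork>
</figure>
</t>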

<t>At very low rates it's possible that there won't be enough available
space to execute the inner loop even once. In these cases band boost
is not possible but its overhead is completely eliminated. Because of the
high cost of band boost when activated a reasonable encoder should not be
using it at very low rates. The reference implements its dynalloc decision
logic at around line 1269 of celt.c.</t>

<t>The allocation trim is an integer value from 0 to 10. The default value of
5 indicates no trim. The trim parameter is entropy coded in order to
lower the coding cost of less extreme adjustments. Values lower than
5 bias the allocation towards lower frequencies and values above 5
bias it towards higher frequencies. Like other signaled parameters, signaling
of the trim is gated so that it is not included if there is insufficient space
available in the bitstream. To decode the trim, first set
the trim value to 5; then, iff the count of decoded 8th bits so far (ec_tell_frac)
plus 48 (6 bits) is less than or equal to the total frame size in 8th
bits minus total_boost (a product of the above band boost procedure) then
decode the trim value using the inverse CDF {127, 126, 124, 119, 109, 87, 41, 19, 9, 4, 2, 0}.</t>

<t>Stereo parameters</t>

<t>Anti-collapse reservation</t>

<t>The allocation computation first begins by setting up some initial conditions.
'total' is set to the available remaining 8th bits, computed by taking the
size of the coded frame times 8 and subtracting ec_tell_frac(). From this value one (8th bit)
is subtracted to assure that the resulting allocation will be conservative. 'anti_collapse_rsv'
is set to 8 (8th bits) iff the frame is a transient, LM is greater than 1, and total is
greater than or equal to (LM+2) * 8. Total is then decremented by anti_collapse_rsv and clamped
to be equal to or greater than zero. 'skip_rsv' is set to 8 (8th bits) if total is greater than
8, otherwise it is zero. Total is then decremented by skip_rsv. This reserves space for the
final skipping flag.</t>

<t>If the current frame is stereo, intensity_rsv is set to the conservative log2 in 8th bits
of the number of coded bands for this frame (given by the table LOG2_FRAC_TABLE). If
intensity_rsv is greater than total, then intensity_rsv is set to zero; otherwise total is
decremented by intensity_rsv, and if total is still greater than 8, dual_stereo_rsv is
set to 8 and total is decremented by dual_stereo_rsv.</t>

<t>The allocation process then computes a vector representing the hard minimum amount of allocation
any band will receive for shape. This minimum is higher than the technical limit of the PVQ
process, but very low rate allocations produce an excessively sparse spectrum and these bands
are better served by having no allocation at all. For each coded band set thresh[band] to
twenty-four times the number of MDCT bins in the band and divide by 16. If 8 times the number
of channels is greater, use that instead. This sets the minimum allocation to one bit per channel
or 48 128th bits per MDCT bin, whichever is greater. The band size dependent part of this
value is not scaled by the channel count because at the very low rates where this limit is
applicable there will usually be no bits allocated to the side.</t>

<t>The previously decoded allocation trim is used to derive a vector of per-band adjustments,
'trim_offsets[]'. For each coded band take the alloc_trim, subtract 5 and LM, then multiply
the result by the number of channels, the number of MDCT bins in the shortest frame size for this mode,
the number of remaining bands, 2^LM, and 8. Then divide this value by 64. Finally, if the
number of MDCT bins in the band per channel is only one, 8 times the number of channels is subtracted
in order to diminish the allocation by one bit, because width-1 bands receive greater benefit
from the coarse energy coding.</t>
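<t>
As a non-normative sketch of the two computations above (with hypothetical
 variable names, and right shifts standing in for the truncating divisions),
 thresh[] and trim_offset[] can be derived as follows; N0[] holds the number
 of MDCT bins per band at the shortest frame size for one channel:
<figure>
<artwork><![CDATA[
static void thresh_and_trim_sketch(int *thresh, int *trim_offset,
                                   const int *N0, int start, int end,
                                   int C, int LM, int alloc_trim)
{
    int band;
    for (band = start; band < end; band++) {
        int bins = N0[band] << LM;   /* bins in band, one channel */
        /* minimum useful shape allocation, in 8th bits */
        int t = (24*bins) >> 4;
        thresh[band] = t > (C << 3) ? t : (C << 3);
        /* per-band adjustment from the allocation trim */
        trim_offset[band] = C*N0[band]*(alloc_trim - 5 - LM)
                            *(end - band - 1)*(1 << (LM + 3)) >> 6;
        if (bins == 1)
            trim_offset[band] -= C << 3;
    }
}
]]></artwork>
</figure>
</t>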


</section>

<section anchor="PVQ-decoder" title="Shape Decoder">
<t>
In each band, the normalized <spanx style="emph">shape</spanx> is encoded
using a vector quantization scheme called a "Pyramid vector quantizer".
</t>

<t>In
the simplest case, the number of bits allocated in
<xref target="allocation"></xref> is converted to a number of pulses as described
by <xref target="bits-pulses"></xref>. Knowing the number of pulses and the
number of samples in the band, the decoder calculates the size of the codebook
as detailed in <xref target="cwrs-decoder"></xref>. The size is used to decode
an unsigned integer (uniform probability model), which is the codeword index.
This index is converted into the corresponding vector as explained in
<xref target="cwrs-decoder"></xref>. This vector is then scaled to unit norm.
</t>

<section anchor="bits-pulses" title="Bits to Pulses">
<t>
Although the allocation is performed in 1/8th bit units, the quantization requires
an integer number of pulses K. To do this, the encoder searches for the value
of K that produces the number of bits that is the nearest to the allocated value
(rounding down if exactly half-way between two values), subject to not exceeding
the total number of bits available. For efficiency reasons the search is performed against a
precomputed allocation table which only permits some K values for each N. The number of
codebook entries can be computed as explained in <xref target="cwrs-encoding"></xref>. The difference
between the number of bits allocated and the number of bits used is accumulated to a
<spanx style="emph">balance</spanx> (initialised to zero) that helps adjust the
allocation for the next bands. One third of the balance is applied to the
bit allocation of each band to help achieve the target allocation. The only
exceptions are the band before the last and the last band, for which half the balance
and the whole balance are applied, respectively.
</t>
</section>

<section anchor="cwrs-decoder" title="Index Decoding">

<t>
The codeword is decoded as a uniformly-distributed integer value
by decode_pulses() (cwrs.c).
The codeword is converted from a unique index in the same way as specified in
<xref target="PVQ"></xref>. The indexing is based on the calculation of V(N,K)
(denoted N(L,K) in <xref target="PVQ"></xref>), which is the number of possible
combinations of K pulses
in N samples. The number of combinations can be computed recursively as
V(N,K) = V(N-1,K) + V(N,K-1) + V(N-1,K-1), with V(N,0) = 1 and V(0,K) = 0, K != 0.
There are many different ways to compute V(N,K), including pre-computed tables and direct
use of the recursive formulation. The reference implementation applies the recursive
formulation one line (or column) at a time to save on memory use,
along with an alternate,
univariate recurrence to initialise an arbitrary line, and direct
polynomial solutions for small N. All of these methods are
equivalent, and have different trade-offs in speed, memory usage, and
code size. Implementations MAY use any methods they like, as long as
they are equivalent to the mathematical definition.
</t>
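<t>
As a non-normative illustration, the recurrence can be evaluated directly as
 follows; this form takes exponential time and is intended only to restate the
 definition, while the reference uses the equivalent but far more economical
 methods described above:
<figure>
<artwork><![CDATA[
#include <stdint.h>

/* V(N,K): number of combinations of K pulses in N samples. */
static uint64_t V(int N, int K)
{
    if (K == 0) return 1;     /* V(N,0) = 1 */
    if (N == 0) return 0;     /* V(0,K) = 0 for K != 0 */
    return V(N-1, K) + V(N, K-1) + V(N-1, K-1);
}
]]></artwork>
</figure>
</t>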

<t>
The decoding of the codeword from the index is performed as specified in
<xref target="PVQ"></xref>, as implemented in function
decode_pulses() (cwrs.c).
</t>
</section>

<section anchor="spreading" title="Spreading">
<t>
</t>
</section>

<section anchor="split" title="Split decoding">
<t>
To avoid the need for multi-precision calculations when decoding PVQ codevectors,
the maximum size allowed for codebooks is 32 bits. When larger codebooks are
needed, the vector is instead split into two sub-vectors of size N/2.
A quantized gain parameter with precision
derived from the current allocation is entropy coded to represent the relative
gains of each side of the split and the entire decoding process is recursively
applied. Multiple levels of splitting may be applied up to a frame size
dependent limit. The same recursive mechanism is applied for the joint coding
of stereo audio.
</t>

</section>

<section anchor="tf-change" title="Time-Frequency change">
<t>
</t>
</section>


</section>

<section anchor="anti-collapse" title="Anti-collapse processing">
<t>
When the frame has the transient bit set, an anti-collapse bit is decoded.
When anti-collapse is set, then the energy in each small MDCT is prevented
from collapsing to zero. For each band of each MDCT where a collapse is
detected, a pseudo-random signal is inserted with an energy corresponding
to the minimum energy over the two previous frames. A renormalization step is
then required to ensure that the anti-collapse step did not alter the
energy preservation property.
</t>
</section>

<section anchor="denormalization" title="Denormalization">
<t>
Just like each band was normalized in the encoder, the last step of the decoder before
the inverse MDCT is to denormalize the bands. Each decoded normalized band is
multiplied by the square root of the decoded energy. This is done by denormalise_bands()
(bands.c).
</t>
</section>

<section anchor="inverse-mdct" title="Inverse MDCT">
<t>The inverse MDCT implementation has no special characteristics. The
input is N frequency-domain samples and the output is 2*N time-domain
samples, while scaling by 1/2. The output is windowed using the same window
as the encoder. The IMDCT and windowing are performed by mdct_backward
(mdct.c). If a time-domain pre-emphasis
window was applied in the encoder, the (inverse) time-domain de-emphasis window
is applied on the IMDCT result.
</t>

<section anchor="post-filter" title="Post-filter">
<t>
The output of the inverse MDCT (after weighted overlap-add) is sent to the
post-filter. Although the post-filter is applied at the end, the post-filter
parameters are encoded at the beginning, just after the silence flag.
The post-filter can be switched on or off using one bit (logp=1).
If the post-filter is enabled, then the octave is decoded as an integer value
between 0 and 6 of uniform probability. Once the octave is known, the fine pitch
within the octave is decoded using 4+octave raw bits. The final pitch period
is equal to (16&lt;&lt;octave)+fine_pitch-1 so it is bounded between 15 and 1022,
inclusively. Next, the gain is decoded as three raw bits and is equal to
G=3*(int_gain+1)/32. The set of post-filter taps is decoded last using
a pdf equal to {2, 1, 1}/4. Tapset zero corresponds to the filter coefficients
g0 = 0.3066406250, g1 = 0.2170410156, g2 = 0.1296386719. Tapset one
corresponds to the filter coefficients g0 = 0.4638671875, g1 = 0.2680664062,
g2 = 0, and tapset two uses filter coefficients g0 = 0.7998046875,
g1 = 0.1000976562, g2 = 0.
</t>

<t>
The post-filter response is thus computed as:
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
   y(n) = x(n) + G*(g0*y(n-T) + g1*(y(n-T+1)+y(n-T-1))
                              + g2*(y(n-T+2)+y(n-T-2)))
]]>
                </artwork>
              </figure>

During a transition between different gains, a smooth transition is calculated
using the square of the MDCT window. It is important that the values of y(n) be
interpolated one at a time, so that the past values of y(n) used by the filter
are themselves the interpolated ones.
</t>
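<t>
The following non-normative sketch applies the filter above over a region
 where the gain and period are constant (gain and period changes are
 additionally cross-faded as described); y[] is assumed to provide at least
 T+2 past output samples before index 0:
<figure>
<artwork><![CDATA[
static void postfilter_sketch(float *y, const float *x, int N, int T,
                              float G, float g0, float g1, float g2)
{
    int n;
    for (n = 0; n < N; n++)
        y[n] = x[n] + G*(g0*y[n-T]
                       + g1*(y[n-T+1] + y[n-T-1])
                       + g2*(y[n-T+2] + y[n-T-2]));
}
]]></artwork>
</figure>
</t>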
</section>

<section anchor="deemphasis" title="De-emphasis">
<t>
After the post-filter,
the signal is de-emphasized using the inverse of the pre-emphasis filter
used in the encoder: 1/A(z)=1/(1-alpha_p*z^-1), where alpha_p=0.8500061035.
</t>
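<t>
A non-normative sketch of this filter follows; mem holds the previous output
 sample across calls:
<figure>
<artwork><![CDATA[
static void deemphasis_sketch(float *y, const float *x, int N, float *mem)
{
    int n;
    for (n = 0; n < N; n++) {
        y[n] = x[n] + 0.8500061035f*(*mem);
        *mem = y[n];
    }
}
]]></artwork>
</figure>
</t>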
</section>

</section>

<section anchor="Packet Loss Concealment" title="Packet Loss Concealment (PLC)">
<t>
Packet loss concealment (PLC) is an optional decoder-side feature which
SHOULD be included when transmitting over an unreliable channel. Because
PLC is not part of the bit-stream, there are several possible ways to
implement PLC with different complexity/quality trade-offs. The PLC in
the reference implementation finds a periodicity in the decoded
signal and repeats the windowed waveform using the pitch offset. The windowed
waveform is overlapped in such a way as to preserve the time-domain aliasing
cancellation with the previous frame and the next frame. This is implemented
in celt_decode_lost() (mdct.c).
</t>
</section>

</section>

<section anchor="switching" title="Mode Switching">
<t>
Switching between the Opus coding modes requires careful consideration. More
specifically, the transitions that cannot be easily handled are the ones where
the lower frequencies have to switch between the SILK LP-based model and the CELT
transform model. If nothing is done, a glitch will occur for these transitions.
On the other hand, switching between the SILK-only modes and the hybrid mode
does not require any special treatment.
</t>

<t>
There are two ways to avoid or reduce glitches during the problematic mode 
transitions: with, or without side information. Only transitions with side
information are normatively specified. For transitions with no side
information, it is RECOMMENDED for the decoder to use a concealment technique
(e.g. make use of the PLC algorithm) to "fill in"
the gap or the discontinuity caused by the mode transition. Note that this
concealment MUST NOT be applied when switching between the SILK mode and the
hybrid mode or vice versa. Similarly, it MUST NOT be applied when merely
changing the bandwidth within the same mode.
</t>

<section anchor="side-info" title="Switching Side Information">
<t>
Switching with side information involves transmitting in-band a 5-ms
"redundant" CELT frame within the Opus frame.
This frame is designed to fill in the gap or discontinuity without requiring
the decoder to conceal it. For transitions from a CELT-only frame to a
SILK-only or hybrid frame, the redundant frame is inserted in the frame
following the transition (i.e. the SILK-only/hybrid frame). For transitions
from a SILK-only/hybrid frame to a CELT-only frame, the redundant frame is
inserted in the first frame. For all SILK-only and hybrid frames (not only
those involved in a mode transition), a binary symbol of probability 2^-12
needs to be decoded just after the SILK part of the bit-stream. When the
symbol value is 1, then the frame includes an embedded redundant frame. The
redundant frame always starts and ends on byte boundaries. For SILK-only
frames, the number of bytes is simply the number of whole remaining bytes.
For hybrid frames, the number of bytes is equal to 2, plus a decoded unsigned
integer (ec_dec_uint()) between 0 and 255. For hybrid frames, the redundant
frame is placed at the end of the frame, after the CELT layer of the
hybrid frame. The redundant frame is decoded like any other CELT-only frame,
with the exception that it does not contain a TOC byte. The bandwidth
is instead set to the same bandwidth as the current frame (for medium-band
frames, the redundant frame is set to wideband).
</t>

<t>
For CELT-only to SILK-only/hybrid transitions, the first
2.5 ms of the redundant frame is used as-is for the reconstructed
output. The remaining 2.5 ms is overlapped and added (cross-faded using
the square of the MDCT power-complementary window) to the decoded SILK/hybrid
signal, ensuring a smooth transition. For SILK-only/hybrid to CELT-only
transitions, only the second half of the 5-ms decoded redundant frame is used.
In that case, only a 2.5-ms cross-fade is applied, still using the
power-complementary window.
</t>
</section>

</section>

</section>


<!--  ******************************************************************* -->
<!--  **************************   OPUS ENCODER   *********************** -->
<!--  ******************************************************************* -->

<section title="Opus Encoder">
<t>
Opus encoder block diagram.
<figure>
<artwork>
<![CDATA[
         +----------+    +-------+
         |  sample  |    | SILK  |
      +->|   rate   |--->|encoder|--+
      |  |conversion|    |       |  |
audio |  +----------+    +-------+  |    +-------+
------+                             +--->| Range |
      |  +-------+                       |encoder|---->
      |  | CELT  |                  +--->|       | bit-stream
      +->|encoder|------------------+    +-------+
         |       |
         +-------+
]]>
</artwork>
</figure>
</t>

<section anchor="range-encoder" title="Range Coder">
<t>
The range coder also acts as the bit-packer for Opus. It is
used in three different ways, to encode:
<list style="symbols">
<t>entropy-coded symbols with a fixed probability model using ec_encode(), (entenc.c)</t>
<t>integers from 0 to 2**M-1 using ec_enc_uint() or ec_enc_bits(), (entenc.c)</t>
<t>integers from 0 to N-1 (where N is not a power of two) using ec_enc_uint(). (entenc.c)</t>
</list>
</t>

<t>
The range encoder maintains an internal state vector composed of the
four-tuple (low,rng,rem,ext), representing the low end of the current
range, the size of the current range, a single buffered output octet,
and a count of additional carry-propagating output octets. Both rng
and low are 32-bit unsigned integer values, rem is an octet value or
the special value -1, and ext is an integer with at least 16 bits.
This state vector is initialized at the start of each frame to
the value (0,2**31,-1,0). The reference implementation re-uses the
'val' field of the entropy coder structure to hold low, in order to
allow the same structure to be used for encoding and decoding, but
we maintain the distinction here for clarity.
</t>

<section anchor="encoding-symbols" title="Encoding Symbols">
<t>
   The main encoding function is ec_encode() (entenc.c),
   which takes as an argument a three-tuple (fl,fh,ft)
   describing the range of the symbol to be encoded in the current
   context, with 0 &lt;= fl &lt; fh &lt;= ft &lt;= 65535. The values of this tuple
   are derived from the probability model for the symbol. Let f(i) be
   the frequency of the ith symbol in the current context. Then the
   three-tuple corresponding to the kth symbol is given by
   <![CDATA[
fl=sum(f(i),i<k), fh=fl+f(k), and ft=sum(f(i)).
]]>
</t>
<t>
   ec_encode() updates the state of the encoder as follows. If fl is
   greater than zero, then low = low + rng - (rng/ft)*(ft-fl) and
   rng = (rng/ft)*(fh-fl). Otherwise, low is unchanged and
   rng = rng - (rng/ft)*(ft-fh). The divisions here are exact integer
   division. After this update, the range is normalized.
</t>
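<t>
As a non-normative illustration, the following C sketch performs the same
arithmetic on a (low,rng) pair. Carry propagation and renormalization,
described in the following paragraphs, are omitted here.
</t>
<figure>
<artwork>
<![CDATA[
#include <stdint.h>

/* Non-normative sketch of the ec_encode() state update for a symbol
 * described by (fl,fh,ft).  All divisions are exact integer divisions. */
static void encode_update(uint32_t *low, uint32_t *rng,
                          uint32_t fl, uint32_t fh, uint32_t ft)
{
   uint32_t r = *rng/ft;
   if (fl > 0) {
      *low += *rng - r*(ft - fl);
      *rng  = r*(fh - fl);
   } else {
      *rng -= r*(ft - fh);
   }
   /* The range must now be renormalized; see below. */
}
]]>
</artwork>
</figure>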
<t>
   To normalize the range, the following process is repeated until
   rng &gt; 2**23. First, the top 9 bits of low, (low&gt;&gt;23), are placed into
   a carry buffer. Then, low is set to <![CDATA[(low << 8 & 0x7FFFFFFF) and rng
   is set to (rng<<8)]]>. This process is carried out by
   ec_enc_normalize() (entenc.c).
</t>
<t>
   The 9 bits produced in each iteration of the normalization loop
   consist of 8 data bits and a carry flag. The final value of the
   output bits is not determined until carry propagation is accounted
   for. Therefore the reference implementation buffers a single
   (non-propagating) output octet and keeps a count of additional
   propagating (0xFF) output octets. An implementation MAY choose to use
   any mathematically equivalent scheme to perform carry propagation.
</t>
<t>
   The function ec_enc_carry_out() (entenc.c) performs
   this buffering. It takes a 9-bit input value, c, from the normalization:
   8 bits of output and a carry bit. If c is 0xFF, then ext is incremented
   and no octets are output. Otherwise, if rem is not the special value
   -1, then the octet (rem+(c>>8)) is output. Then ext octets are output
   with the value 0 if the carry bit is set, or 0xFF if it is not, and
   rem is set to the lower 8 bits of c. After this, ext is set to zero.
</t>
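<t>
The following non-normative sketch combines the normalization loop and the
carry-out buffering described above. The emit() callback, which stands for
appending one octet to the output buffer, is an assumption made for the
sketch.
</t>
<figure>
<artwork>
<![CDATA[
#include <stdint.h>

/* Non-normative sketch of carry-out buffering: rem holds the single
 * buffered octet (or -1), ext counts pending propagating (0xFF) octets,
 * and emit() appends one octet to the output buffer. */
static void carry_out(int *rem, uint32_t *ext, unsigned c,
                      void (*emit)(unsigned))
{
   if (c == 0xFF) {
      (*ext)++;                      /* final value depends on a later carry */
   } else {
      unsigned carry = c>>8;         /* the carry bit */
      if (*rem >= 0) emit((unsigned)*rem + carry);
      for (; *ext > 0; (*ext)--)     /* ext ends up at zero */
         emit(carry ? 0x00 : 0xFF);
      *rem = c&0xFF;                 /* buffer the low 8 bits of c */
   }
}

/* Non-normative sketch of range normalization: repeat until rng > 2**23. */
static void enc_normalize(uint32_t *low, uint32_t *rng,
                          int *rem, uint32_t *ext, void (*emit)(unsigned))
{
   while (*rng <= (UINT32_C(1)<<23)) {
      carry_out(rem, ext, (unsigned)(*low>>23), emit);  /* top 9 bits of low */
      *low = (*low<<8)&0x7FFFFFFF;
      *rng <<= 8;
   }
}
]]>
</artwork>
</figure>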
<t>
   In the reference implementation, a special version of ec_encode()
   called ec_encode_bin() (entenc.c) is defined to
   take a two-tuple (fl,ftb), where <![CDATA[0 <= fl < 2**ftb and ftb < 16. It is
   mathematically equivalent to calling ec_encode() with the three-tuple
   (fl,fl+1,1<<ftb)]]>, but avoids using division.

</t>
</section>

<section anchor="encoding-bits" title="Encoding Raw Bits">
<t>
   The CELT layer also allows directly encoding a series of raw bits, outside
   of the range coder, implemented in ec_enc_bits() (entenc.c).
   The raw bits are packed at the end of the packet, starting by storing the
   least significant bit of the value to be packed in the least significant bit
   of the last byte, filling up to the most significant bit in
   the last byte, and then continuing in the least significant bit of the
   penultimate byte, and so on.
   This packing may continue into the last byte output by the range coder,
   though the format should render it impossible to overwrite any set bit
   produced by the range coder when the procedure in
   <xref target='encoder-finalizing'/> is followed to finalize the stream.
</t>
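<t>
The following non-normative sketch illustrates this backwards packing order.
The names buf, storage, and nraw are assumptions made for the example, and
the buffer is assumed to start out zero-filled.
</t>
<figure>
<artwork>
<![CDATA[
/* Non-normative sketch: append the 'bits' least significant bits of
 * 'val' to the raw-bit region growing backwards from the end of the
 * 'storage'-byte buffer 'buf'.  '*nraw' counts the raw bits already
 * written and is updated on return. */
static void put_raw_bits(unsigned char *buf, int storage,
                         int *nraw, unsigned val, int bits)
{
   int i;
   for (i = 0; i < bits; i++) {
      int bitpos = *nraw + i;               /* 0 is the LSB of the last byte */
      int byte   = storage - 1 - bitpos/8;  /* counts back from the end */
      buf[byte] |= (unsigned char)(((val>>i)&1) << (bitpos&7));
   }
   *nraw += bits;
}
]]>
</artwork>
</figure>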
</section>

<section anchor="encoding-ints" title="Encoding Uniformly Distributed Integers">
<t>
   The function ec_enc_uint() is based on ec_encode() and encodes one of N
   equiprobable symbols, each with a frequency of 1, where N may be as large as
   2**32-1. Because ec_encode() is limited to a total frequency of 2**16-1, this
   is done by encoding a series of symbols in smaller contexts.
</t>
<t>
   ec_enc_uint() (entenc.c) takes a two-tuple (fl,ft),
   where ft is not necessarily a power of two. Let ftb be the location
   of the highest 1 bit in the two's-complement representation of
   (ft-1), or -1 if no bits are set. If ftb>8, then the top 8 bits of fl
   are encoded using ec_encode() with the three-tuple
   (fl>>ftb-8,(fl>>ftb-8)+1,(ft-1>>ftb-8)+1), and the remaining bits
   are encoded as raw bits. Otherwise, fl is encoded with ec_encode() directly
   using the three-tuple (fl,fl+1,ft).
</t>
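<t>
The following non-normative sketch shows this split between the range-coded
part and the raw bits. It follows the reference implementation's convention,
in which ftb counts the number of bits needed to represent ft-1; the encode()
and encode_raw() callbacks stand for ec_encode() and the raw-bit packing and
are assumptions made for the sketch.
</t>
<figure>
<artwork>
<![CDATA[
#include <stdint.h>

/* Number of bits needed to represent v (0 for v == 0). */
static int ilog(uint32_t v)
{
   int n = 0;
   while (v) { n++; v >>= 1; }
   return n;
}

/* Non-normative sketch of encoding fl in the range [0, ft), where ft
 * need not be a power of two and may be as large as 2**32 - 1. */
static void enc_uint_sketch(uint32_t fl, uint32_t ft,
        void (*encode)(uint32_t fl, uint32_t fh, uint32_t ft),
        void (*encode_raw)(uint32_t val, int bits))
{
   int ftb;
   ft--;                          /* ft now holds ft-1 */
   ftb = ilog(ft);
   if (ftb > 8) {
      ftb -= 8;
      /* Range code the top 8 bits, then pack the rest as raw bits. */
      encode(fl>>ftb, (fl>>ftb)+1, (ft>>ftb)+1);
      encode_raw(fl&((UINT32_C(1)<<ftb)-1), ftb);
   } else {
      encode(fl, fl+1, ft+1);
   }
}
]]>
</artwork>
</figure>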
</section>

<section anchor="encoder-finalizing" title="Finalizing the Stream">
<t>
   After all symbols are encoded, the stream must be finalized by
   outputting a value inside the current range. Let end be the integer
   in the interval [low,low+rng) with the largest number of trailing
   zero bits, b, such that end+(1&lt;&lt;b)-1 is also in the interval
   [low,low+rng). Then, while end is not zero, the top 9 bits of end, i.e.,
   <![CDATA[(end>>23), are sent to the carry buffer, and end is replaced by
   (end<<8&0x7FFFFFFF). Finally, if the value in the carry buffer, rem, is]]>
   neither zero nor the special value -1, or the carry count, ext, is
   greater than zero, then 9 zero bits are sent to the carry buffer.
   After the carry buffer is finished outputting octets, the rest of the
   output buffer (if any) is padded with zero bits, until it reaches the raw
   bits. Finally, rem is set to the
   special value -1. This process is implemented by ec_enc_done()
   (entenc.c).
</t>
</section>

<section anchor="encoder-tell" title="Current Bit Usage">
<t>
   The bit allocation routines in Opus need to be able to determine a
   conservative upper bound on the number of bits that have been used
   to encode the current frame thus far. This drives allocation
   decisions and ensures that the range coder and raw bits will not
   overflow the output buffer. This is computed in the
   reference implementation to whole-bit precision by
   the function ec_tell() (entcode.h) and to fractional 1/8th bit
   precision by the function ec_tell_frac() (entcode.c).
   Like all operations in the range coder, it must be implemented in a
   bit-exact manner, and must produce exactly the same value returned by
   the same functions in the decoder after decoding the same symbols.
</t>
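<t>
As a non-normative illustration, the whole-bit count can be derived from the
total number of bits that have entered the coder and the amount of precision
still available in rng. The nbits_total argument below names a running total
assumed to be maintained alongside the coder state.
</t>
<figure>
<artwork>
<![CDATA[
#include <stdint.h>

/* Number of bits needed to represent v (0 for v == 0). */
static int ilog(uint32_t v)
{
   int n = 0;
   while (v) { n++; v >>= 1; }
   return n;
}

/* Non-normative sketch: conservative whole-bit usage so far.  The
 * lg(rng) bits of precision remaining in the range have not yet been
 * committed to the output, so they are subtracted from the total. */
static int tell_bits(int nbits_total, uint32_t rng)
{
   return nbits_total - ilog(rng);
}
]]>
</artwork>
</figure>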
</section>

</section>

        <section title='SILK Encoder'>
          <t>
            In the following, we focus on the core encoder and describe its components. For simplicity, we will refer to the core encoder simply as the encoder in the remainder of this document. An overview of the encoder is given in <xref target="encoder_figure" />.
          </t>

          <figure align="center" anchor="encoder_figure">
            <artwork align="center">
              <![CDATA[
                                                              +---+
                               +----------------------------->|   |
        +---------+            |     +---------+              |   |
        |Voice    |            |     |LTP      |              |   |
 +----->|Activity |-----+      +---->|Scaling  |---------+--->|   |
 |      |Detector |  3  |      |     |Control  |<+  12   |    |   |
 |      +---------+     |      |     +---------+ |       |    |   |
 |                      |      |     +---------+ |       |    |   |
 |                      |      |     |Gains    | |  11   |    |   |
 |                      |      |  +->|Processor|-|---+---|--->| R |
 |                      |      |  |  |         | |   |   |    | a |
 |                     \/      |  |  +---------+ |   |   |    | n |
 |                 +---------+ |  |  +---------+ |   |   |    | g |
 |                 |Pitch    | |  |  |LSF      | |   |   |    | e |
 |              +->|Analysis |-+  |  |Quantizer|-|---|---|--->|   |
 |              |  |         |4|  |  |         | | 8 |   |    | E |->
 |              |  +---------+ |  |  +---------+ |   |   |    | n |14
 |              |              |  |   9/\  10|   |   |   |    | c |
 |              |              |  |    |    \/   |   |   |    | o |
 |              |  +---------+ |  |  +----------+|   |   |    | d |
 |              |  |Noise    | +--|->|Prediction|+---|---|--->| e |
 |              +->|Shaping  |-|--+  |Analysis  || 7 |   |    | r |
 |              |  |Analysis |5|  |  |          ||   |   |    |   |
 |              |  +---------+ |  |  +----------+|   |   |    |   |
 |              |              |  |       /\     |   |   |    |   |
 |              |    +---------|--|-------+      |   |   |    |   |
 |              |    |        \/  \/            \/  \/  \/    |   |
 |  +---------+ |    |      +---------+       +------------+  |   |
 |  |High-Pass| |    |      |         |       |Noise       |  |   |
-+->|Filter   |-+----+----->|Prefilter|------>|Shaping     |->|   |
1   |         |      2      |         |   6   |Quantization|13|   |
    +---------+             +---------+       +------------+  +---+

1:  Input speech signal
2:  High passed input signal
3:  Voice activity estimate
4:  Pitch lags (per 5 ms) and voicing decision (per 20 ms)
5:  Noise shaping quantization coefficients
  - Short term synthesis and analysis
    noise shaping coefficients (per 5 ms)
  - Long term synthesis and analysis noise
    shaping coefficients (per 5 ms and for voiced speech only)
  - Noise shaping tilt (per 5 ms)
  - Quantizer gain/step size (per 5 ms)
6:  Input signal filtered with analysis noise shaping filters
7:  Short and long term prediction coefficients
    LTP (per 5 ms) and LPC (per 20 ms)
8:  LSF quantization indices
9:  LSF coefficients
10: Quantized LSF coefficients
11: Processed gains, and synthesis noise shape coefficients
12: LTP state scaling coefficient. Controlling error propagation
   / prediction gain trade-off
13: Quantized signal
14: Range encoded bitstream

]]>
            </artwork>
            <postamble>Encoder block diagram.</postamble>
          </figure>

          <section title='Voice Activity Detection'>
            <t>
              The input signal is processed by a VAD (Voice Activity Detector) to produce a measure of voice activity, and also spectral tilt and signal-to-noise estimates, for each frame. The VAD uses a sequence of half-band filterbanks to split the signal into four subbands: 0 - Fs/16, Fs/16 - Fs/8, Fs/8 - Fs/4, and Fs/4 - Fs/2, where Fs is the sampling frequency, that is, 8, 12, 16, or 24&nbsp;kHz. The lowest subband, from 0 - Fs/16, is high-pass filtered with a first-order MA (Moving Average) filter (with transfer function H(z) = 1-z^(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and an SNR (Signal-to-Noise Ratio) value is computed as the logarithm of the ratio of energy to noise level. Using these intermediate variables, the following parameters are calculated for use in other SILK modules:
              <list style="symbols">
                <t>
                  Average SNR. The average of the subband SNR values.
                </t>

                <t>
                  Smoothed subband SNRs. Temporally smoothed subband SNR values.
                </t>

                <t>
                  Speech activity level. Based on the average SNR and a weighted average of the subband energies.
                </t>

                <t>
                  Spectral tilt. A weighted average of the subband SNRs, with positive weights for the low subbands and negative weights for the high subbands.
                </t>
              </list>
            </t>
          </section>

          <section title='High-Pass Filter'>
            <t>
              The input signal is filtered by a high-pass filter to remove the lowest part of the spectrum that contains little speech energy and may contain background noise. This is a second order ARMA (Auto Regressive Moving Average) filter with a cut-off frequency around 70&nbsp;Hz.
            </t>
            <t>
              In the future, a music detector may also be used to lower the cut-off frequency when the input signal is detected to be music rather than speech.
            </t>
          </section>

          <section title='Pitch Analysis' anchor='pitch_estimator_overview_section'>
            <t>
              The high-passed input signal is processed by the open loop pitch estimator shown in <xref target='pitch_estimator_figure' />.
              <figure align="center" anchor="pitch_estimator_figure">
                <artwork align="center">
                  <![CDATA[
                                 +--------+  +----------+
                                 |2 x Down|  |Time-     |
                              +->|sampling|->|Correlator|     |
                              |  |        |  |          |     |4
                              |  +--------+  +----------+    \/
                              |                    | 2    +-------+
                              |                    |  +-->|Speech |5
    +---------+    +--------+ |                   \/  |   |Type   |->
    |LPC      |    |Down    | |              +----------+ |       |
 +->|Analysis | +->|sample  |-+------------->|Time-     | +-------+
 |  |         | |  |to 8 kHz|                |Correlator|----------->
 |  +---------+ |  +--------+                |__________|          6
 |       |      |                                  |3
 |      \/      |                                 \/
 |  +---------+ |                            +----------+
 |  |Whitening| |                            |Time-     |
-+->|Filter   |-+--------------------------->|Correlator|----------->
1   |         |                              |          |          7
    +---------+                              +----------+

1: Input signal
2: Lag candidates from stage 1
3: Lag candidates from stage 2
4: Correlation threshold
5: Voiced/unvoiced flag
6: Pitch correlation
7: Pitch lags
]]>
                </artwork>
                <postamble>Block diagram of the pitch estimator.</postamble>
              </figure>
              The pitch analysis finds a binary voiced/unvoiced classification, and, for frames classified as voiced, four pitch lags per frame - one for each 5&nbsp;ms subframe - and a pitch correlation indicating the periodicity of the signal. The input is first whitened using a Linear Prediction (LP) whitening filter, where the coefficients are computed through standard Linear Prediction Coding (LPC) analysis. The order of the whitening filter is 16 for best results, but is reduced to 12 for medium complexity and 8 for low complexity modes. The whitened signal is analyzed to find pitch lags for which the time correlation is high. The analysis consists of three stages for reducing the complexity:
              <list style="symbols">
                <t>In the first stage, the whitened signal is downsampled to 4&nbsp;kHz (from 8&nbsp;kHz) and the current frame is correlated to a signal delayed by a range of lags, starting from a shortest lag corresponding to 500&nbsp;Hz, to a longest lag corresponding to 56&nbsp;Hz.</t>

                <t>
                  The second stage operates on an 8&nbsp;kHz signal (downsampled from 12, 16, or 24&nbsp;kHz) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on:
                  <list style="symbols">
                    <t>
                      Whether the previous frame was classified as voiced
                    </t>
                    <t>
                      The speech activity level
                    </t>
                    <t>
                      The spectral tilt.
                    </t>
                  </list>
                  If the threshold is exceeded, the current frame is classified as voiced and the lag with the highest adjusted correlation is stored for a final pitch analysis of the highest precision in the third stage.
                </t>
                <t>
                  The last stage operates directly on the whitened input signal to compute time correlations for each of the four subframes independently in a narrow range around the lag with highest correlation from the second stage.
                </t>
              </list>
            </t>
          </section>

          <section title='Noise Shaping Analysis' anchor='noise_shaping_analysis_overview_section'>
            <t>
              The noise shaping analysis finds gains and filter coefficients used in the prefilter and noise shaping quantizer. These parameters are chosen such that they will fulfil several requirements:
              <list style="symbols">
                <t>Balancing quantization noise and bitrate. The quantization gains determine the step size between reconstruction levels of the excitation signal. Therefore, increasing the quantization gain amplifies quantization noise, but also reduces the bitrate by lowering the entropy of the quantization indices.</t>
                <t>Spectral shaping of the quantization noise; the noise shaping quantizer is capable of reducing quantization noise in some parts of the spectrum at the cost of increased noise in other parts without substantially changing the bitrate. By shaping the noise such that it follows the signal spectrum, it becomes less audible. In practice, best results are obtained by making the shape of the noise spectrum slightly flatter than the signal spectrum.</t>
                <t>Deemphasizing spectral valleys; by using different coefficients in the analysis and synthesis part of the prefilter and noise shaping quantizer, the levels of the spectral valleys can be decreased relative to the levels of the spectral peaks such as speech formants and harmonics. This reduces the entropy of the signal, which is the difference between the coded signal and the quantization noise, thus lowering the bitrate.</t>
                <t>Matching the levels of the decoded speech formants to the levels of the original speech formants; an adjustment gain and a first order tilt coefficient are computed to compensate for the effect of the noise shaping quantization on the level and spectral tilt.</t>
              </list>
            </t>
            <t>
              <figure align="center" anchor="noise_shape_analysis_spectra_figure">
                <artwork align="center">
                  <![CDATA[
  / \   ___
   |   // \\
   |  //   \\     ____
   |_//     \\___//  \\         ____
   | /  ___  \   /    \\       //  \\
 P |/  /   \  \_/      \\_____//    \\
 o |  /     \     ____  \     /      \\
 w | /       \___/    \  \___/  ____  \\___ 1
 e |/                  \       /    \  \
 r |                    \_____/      \  \__ 2
   |                                  \
   |                                   \___ 3
   |
   +---------------------------------------->
                    Frequency

1: Input signal spectrum
2: Deemphasized and level matched spectrum
3: Quantization noise spectrum
]]>
                </artwork>
                <postamble>Noise shaping and spectral de-emphasis illustration.</postamble>
              </figure>
              <xref target='noise_shape_analysis_spectra_figure' /> shows an example of an input signal spectrum (1). After de-emphasis and level matching, the spectrum has deeper valleys (2). The quantization noise spectrum (3) more or less follows the input signal spectrum, while having slightly less pronounced peaks. The entropy, which provides a lower bound on the bitrate for encoding the excitation signal, is proportional to the area between the deemphasized spectrum (2) and the quantization noise spectrum (3). Without de-emphasis, the entropy is proportional to the area between input spectrum (1) and quantization noise (3) - clearly higher.
            </t>

            <t>
              The transformation from input signal to deemphasized signal can be described as a filtering operation with a filter
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
                                     Wana(z)
H(z) = G * ( 1 - c_tilt * z^(-1) ) * -------
                                     Wsyn(z),
            ]]>
                </artwork>
              </figure>
              having an adjustment gain G, a first order tilt adjustment filter with
              tilt coefficient c_tilt, and where
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               16                                  d
               __                                 __
Wana(z) = (1 - \  a_ana(k)*z^(-k)) * (1 - z^(-L) * \  b_ana(k)*z^(-k)),
               /_                                 /_
               k=1                                k=-d
            ]]>
                </artwork>
              </figure>
              is the analysis part of the de-emphasis filter, consisting of the short-term shaping filter with coefficients a_ana(k), and the long-term shaping filter with coefficients b_ana(k) and pitch lag L. The parameter d determines the number of long-term shaping filter taps.
            </t>

            <t>
              Similarly, but without the tilt adjustment, the synthesis part can be written as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               16                                  d
               __                                 __
Wsyn(z) = (1 - \  a_syn(k)*z^(-k)) * (1 - z^(-L) * \  b_syn(k)*z^(-k)).
               /_                                 /_
               k=1                                k=-d
            ]]>
                </artwork>
              </figure>
            </t>
            <t>
              All noise shaping parameters are computed and applied per subframe of 5 milliseconds. First, an LPC analysis is performed on a windowed signal block of 15 milliseconds. The signal block has a look-ahead of 5 milliseconds relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of 16 for best quality or 12 in low complexity operation. The quantization gain is found as the square-root of the residual energy from the LPC analysis, multiplied by a value inversely proportional to the coding quality control parameter and the pitch correlation.
            </t>
            <t>
              Next, we find the two sets of short-term noise shaping coefficients a_ana(k) and a_syn(k) by applying different amounts of bandwidth expansion to the coefficients found in the LPC analysis. This bandwidth expansion moves the roots of the LPC polynomial towards the origin, using the formulas
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
 a_ana(k) = a(k)*g_ana^k, and
 a_syn(k) = a(k)*g_syn^k,
            ]]>
                </artwork>
              </figure>
              where a(k) is the k'th LPC coefficient and the bandwidth expansion factors g_ana and g_syn are calculated as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
g_ana = 0.94 - 0.02*C, and
g_syn = 0.94 + 0.02*C,
            ]]>
                </artwork>
              </figure>
              where C is the coding quality control parameter between 0 and 1. Applying more bandwidth expansion to the analysis part than to the synthesis part gives the desired de-emphasis of spectral valleys in between formants.
            </t>
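            <t>
              A non-normative sketch of this bandwidth expansion is given below; the array layout (a[k-1] holding a(k) in the formulas above) is an assumption made for the example.
            </t>
            <figure>
              <artwork>
                <![CDATA[
/* Non-normative sketch: chirp the LPC coefficients a[0..order-1]
 * (a[k-1] holds a(k) above) to obtain the analysis and synthesis
 * noise shaping coefficients.  C is the coding quality control
 * parameter in [0,1]. */
static void bandwidth_expand(const float *a, float *a_ana, float *a_syn,
                             int order, float C)
{
   const float g_ana = 0.94f - 0.02f*C;
   const float g_syn = 0.94f + 0.02f*C;
   float p_ana = 1.0f, p_syn = 1.0f;
   int k;
   for (k = 0; k < order; k++) {
      p_ana *= g_ana;              /* g_ana^(k+1) */
      p_syn *= g_syn;              /* g_syn^(k+1) */
      a_ana[k] = a[k]*p_ana;
      a_syn[k] = a[k]*p_syn;
   }
}
]]>
              </artwork>
            </figure>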

            <t>
              The long-term shaping is applied only during voiced frames. It uses three filter taps, described by
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
b_ana = F_ana * [0.25, 0.5, 0.25], and
b_syn = F_syn * [0.25, 0.5, 0.25].
            ]]>
                </artwork>
              </figure>
              For unvoiced frames these coefficients are set to 0. The multiplication factors F_ana and F_syn are chosen between 0 and 1, depending on the coding quality control parameter, as well as the calculated pitch correlation and smoothed subband SNR of the lowest subband. By having F_ana less than F_syn, the pitch harmonics are emphasized relative to the valleys in between the harmonics.
            </t>

            <t>
              The tilt coefficient c_tilt is for unvoiced frames chosen as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
c_tilt = 0.4, and as
c_tilt = 0.04 + 0.06 * C
            ]]>
                </artwork>
              </figure>
              for voiced frames, where C again is the coding quality control parameter and is between 0 and 1.
            </t>
            <t>
              The adjustment gain G serves to correct any level mismatch between original and decoded signal that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square-root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is by first computing the reflection coefficients from the LPC coefficients through the step-down algorithm, and extracting the prediction gain from the reflection coefficients as
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
               K
              ___
 predGain = ( | | (1 - (r_k)^2) )^(-0.5),
              k=1
            ]]>
                </artwork>
              </figure>
              where r_k is the k'th reflection coefficient.
            </t>
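            <t>
              The following non-normative sketch evaluates the formula above from a set of reflection coefficients; the step-down algorithm that produces them is not shown.
            </t>
            <figure>
              <artwork>
                <![CDATA[
#include <math.h>

/* Non-normative sketch: prediction gain of an LPC synthesis filter
 * from its reflection coefficients r[0..K-1]. */
static float prediction_gain(const float *r, int K)
{
   float prod = 1.0f;
   int k;
   for (k = 0; k < K; k++)
      prod *= 1.0f - r[k]*r[k];
   return 1.0f/sqrtf(prod);        /* ( product )^(-0.5) */
}
]]>
              </artwork>
            </figure>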

            <t>
              Initial values for the quantization gains are computed as the square-root of the residual energy of the LPC analysis, adjusted by the coding quality control parameter. These quantization gains are later adjusted based on the results of the prediction analysis.
            </t>
          </section>

          <section title='Prefilter'>
            <t>
              In the prefilter the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis, see <xref target='noise_shaping_analysis_overview_section' />. By applying only the noise shaping analysis filter to the input signal, it provides the input to the noise shaping quantizer.
            </t>
          </section>
          <section title='Prediction Analysis' anchor='pred_ana_overview_section'>
            <t>
              The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech is described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. Inputs to this function include the pre-whitened signal from the pitch estimator, see <xref target='pitch_estimator_overview_section' />.
            </t>

            <section title='Voiced Speech' anchor='pred_ana_voiced_overview_section'>
              <t>
                For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bitrate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth order LTP filter for each of four subframes. The LTP coefficients are used to find an LTP residual signal with the simulated output signal as input to obtain better modelling of the output signal. This LTP residual signal is the input to an LPC analysis where the LPCs are estimated using Burgs method, such that the residual energy is minimized. The estimated LPCs are converted to a Line Spectral Frequency (LSF) vector, and quantized as described in <xref target='lsf_quantizer_overview_section' />. After quantization, the quantized LSF vector is converted to LPC coefficients and hence by using these quantized coefficients the encoder remains fully synchronized with the decoder. The LTP coefficients are quantized using a method described in <xref target='ltp_quantizer_overview_section' />. The quantized LPC and LTP coefficients are now used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
              </t>
            </section>
            <section title='Unvoiced Speech' anchor='pred_ana_unvoiced_overview_section'>
              <t>
                For a speech signal that has been classified as unvoiced there is no need for LTP filtering as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for an LTP analysis to be worth-while the cost in terms of complexity and rate. Therefore, the pre-whitened input signal is discarded and instead the high-pass filtered input signal is used for LPC analysis using Burgs method. The resulting LPC coefficients are converted to an LSF vector, quantized as described in the following section and transformed back to obtain quantized LPC coefficients. The quantized LPC coefficients are used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
              </t>
            </section>
          </section>

          <section title='LSF Quantization' anchor='lsf_quantizer_overview_section'>
            <t>The purpose of quantization in general is to significantly lower the bit rate at the cost of some introduced distortion. A higher rate should always result in lower distortion, and lowering the rate will generally lead to higher distortion. A commonly used but generally sub-optimal approach is to use a quantization method with a constant rate where only the error is minimized when quantizing.</t>
            <section title='Rate-Distortion Optimization'>
              <t>Instead, we minimize an objective function that consists of a weighted sum of rate and distortion, and use a codebook with an associated non-uniform rate table. Thus, we take into account that the probability mass function for selecting the codebook entries is by no means guaranteed to be uniform in our scenario. The advantage of this approach is that it ensures that rarely used codebook vector centroids, which model statistical outliers in the training set, can be quantized with a low error but at a relatively high rate. At the same time, this approach also provides the advantage that frequently used centroids are modelled with low error and a relatively low rate. This approach will lead to equal or lower distortion than the fixed rate codebook at any given average rate, provided that the data is similar to the data used for training the codebook.</t>
            </section>

            <section title='Error Mapping' anchor='lsf_error_mapping_overview_section'>
              <t>
                Instead of minimizing the error in the LSF domain, we map the errors to better approximate spectral distortion by applying an individual weight to each element in the error vector. The weight vectors are calculated for each input vector using the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al., see <xref target="laroia-icassp" />.
                Consequently, we solve the following minimization problem, i.e.,
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
LSF_q = argmin { (LSF - c)' * W * (LSF - c) + mu * rate },
        c in C
            ]]>
                  </artwork>
                </figure>
                where LSF_q is the quantized vector, LSF is the input vector to be quantized, and c is the quantized LSF vector candidate taken from the set C of all possible outcomes of the codebook.
              </t>
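              <t>
                The following non-normative sketch evaluates this objective function for a single candidate, assuming the IHMW weights form a diagonal matrix represented by the vector w[]; the candidate rate rate_c is taken from the codebook's rate table.
              </t>
              <figure>
                <artwork>
                  <![CDATA[
/* Non-normative sketch: weighted rate-distortion value of one LSF
 * codebook candidate c[], with diagonal IHMW weights w[] and the
 * candidate's rate rate_c from the non-uniform rate table. */
static float lsf_rd(const float *lsf, const float *c, const float *w,
                    int order, float mu, float rate_c)
{
   float d = 0.0f;
   int i;
   for (i = 0; i < order; i++) {
      float e = lsf[i] - c[i];
      d += w[i]*e*e;            /* (LSF - c)' * W * (LSF - c), W diagonal */
   }
   return d + mu*rate_c;
}
]]>
                </artwork>
              </figure>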
            </section>
            <section title='Multi-Stage Vector Codebook'>
              <t>
                We arrange the codebook in a multiple stage structure to achieve a quantizer that is both memory efficient and highly scalable in terms of computational complexity, see e.g. <xref target="sinervo-norsig" />. In the first stage the input is the LSF vector to be quantized, and in any other stage s > 1, the input is the quantization error from the previous stage, see <xref target='lsf_quantizer_structure_overview_figure' />.
              </t>
                <figure align="center" anchor="lsf_quantizer_structure_overview_figure">
                  <artwork align="center">
                    <![CDATA[
      Stage 1:           Stage 2:                Stage S:
    +----------+       +----------+            +----------+
    |  c_{1,1} |       |  c_{2,1} |            |  c_{S,1} |
LSF +----------+ res_1 +----------+  res_{S-1} +----------+
--->|  c_{1,2} |------>|  c_{2,2} |--> ... --->|  c_{S,2} |--->
    +----------+       +----------+            +----------+ res_S =
        ...                ...                     ...      LSF-LSF_q
    +----------+       +----------+            +----------+
    |c_{1,M1-1}|       |c_{2,M2-1}|            |c_{S,MS-1}|
    +----------+       +----------+            +----------+
    | c_{1,M1} |       | c_{2,M2} |            | c_{S,MS} |
    +----------+       +----------+            +----------+
]]>
                  </artwork>
                  <postamble>Multi-Stage LSF Vector Codebook Structure.</postamble>
                </figure>

              <t>
                By storing a total of M codebook vectors, i.e.,
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
     S
    __
M = \  M_s,
    /_
    s=1
]]>
                  </artwork>
                </figure>
                where M_s is the number of vectors in stage s, we obtain a total of
                <figure align="center">
                  <artwork align="center">
                    <![CDATA[
     S
    ___
T = | | M_s
    s=1
]]>
                  </artwork>
                </figure>
                possible combinations for generating the quantized vector. It is for example possible to represent 2**36 uniquely combined vectors using only 216 vectors in memory, as done in SILK for voiced speech at all sample frequencies above 8&nbsp;kHz.
              </t>
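              <t>
                The following non-normative sketch computes both quantities for a given set of stage sizes.
              </t>
              <figure>
                <artwork>
                  <![CDATA[
/* Non-normative sketch: storage cost M (sum of stage sizes) and number
 * of representable combinations T (product of stage sizes) of a
 * multi-stage codebook with S stages of sizes Ms[0..S-1]. */
static void msvq_sizes(const int *Ms, int S,
                       unsigned long long *M, unsigned long long *T)
{
   int s;
   *M = 0;
   *T = 1;
   for (s = 0; s < S; s++) {
      *M += (unsigned long long)Ms[s];
      *T *= (unsigned long long)Ms[s];  /* may overflow for huge codebooks */
   }
}
]]>
                </artwork>
              </figure>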
            </section>
            <section title='Survivor Based Codebook Search'>
              <t>
                This number of possible combinations is far too high for a full search to be carried out for each frame, so for all stages but the last, i.e., s smaller than S, only the best min( L, Ms ) centroids are carried over to stage s+1. In each stage the objective function, i.e., the weighted sum of accumulated bitrate and distortion, is evaluated for each codebook vector entry and the results are sorted. Only the best paths and the corresponding quantization errors are considered in the next stage. In the last stage S the single best path through the multistage codebook is determined. By varying the maximum number of survivors from each stage to the next, L, the complexity can be adjusted in real time at the cost of a potential increase in the objective function for the resulting quantized vector. This approach scales all the way between the two extremes, L=1 being a greedy search, and the desirable but infeasible full search, L=T/M_S. In fact, a performance almost as good as what can be achieved with the infeasible full search can be obtained at a substantially lower complexity by using this approach, see e.g. <xref target='leblanc-tsap' />.
              </t>
            </section>
            <section title='LSF Stabilization' anchor='lsf_stabilizer_overview_section'>
              <t>If the input is stable, finding the best candidate will usually result in the quantized vector also being stable. However, because of the multi-stage approach, it could in theory happen that the best quantization candidate is unstable, so there is a need to explicitly ensure that the quantized vectors are stable. Therefore we apply an LSF stabilization method which ensures that the LSF parameters are within their valid range, increasingly sorted, and have minimum distances between each other and the border values that have been pre-determined as the 0.01 percentile distance values from a large training set.</t>
            </section>
            <section title='Off-Line Codebook Training'>
              <t>
                The vectors and rate tables for the multi-stage codebook have been trained by minimizing the average of the objective function for LSF vectors from a large training set.
              </t>
            </section>
          </section>

          <section title='LTP Quantization' anchor='ltp_quantizer_overview_section'>
            <t>
              For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. The LTP coefficients for each subframe are quantized using entropy constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20, and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by
              <figure align="center">
                <artwork align="center">
                  <![CDATA[
 RD = u * (b - cb_i)' * W_ltp * (b - cb_i) + r_i,
]]>
                </artwork>
              </figure>
              where u is a fixed, heuristically-determined parameter balancing the distortion and rate. Which codebook gives the best performance for a given LTP vector depends on the weighting matrix for that LTP vector. For example, for a low valued W_ltp, it is advantageous to use the codebook with 10 vectors as it has a lower average rate. For a large W_ltp, on the other hand, it is often better to use the codebook with 40 vectors, as it is more likely to contain the best codebook vector.
              The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic the signal, the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Neither aspect fluctuates very quickly, so the W_ltp matrices for different subframes of one frame are often similar. As a result, one of the three codebooks typically gives good performance for all subframes. Therefore the codebook search for the subframe LTP vectors is constrained to only allow codebook vectors to be chosen from the same codebook, resulting in a rate reduction.
            </t>
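            <t>
              A non-normative sketch of this measure for a single codebook vector is shown below; the row-major storage of W_ltp is an assumption made for the example.
            </t>
            <figure>
              <artwork>
                <![CDATA[
#define LTP_ORDER 5

/* Non-normative sketch: weighted rate-distortion of one LTP codebook
 * vector cb_i[] with rate r_i, using the full LTP_ORDER x LTP_ORDER
 * weighting matrix W_ltp stored row-major in w[]. */
static float ltp_rd(const float *b, const float *cb_i, const float *w,
                    float r_i, float u)
{
   float e[LTP_ORDER];
   float d = 0.0f;
   int i, j;
   for (i = 0; i < LTP_ORDER; i++)
      e[i] = b[i] - cb_i[i];
   for (i = 0; i < LTP_ORDER; i++)
      for (j = 0; j < LTP_ORDER; j++)
         d += e[i]*w[i*LTP_ORDER + j]*e[j];  /* (b-cb_i)' * W_ltp * (b-cb_i) */
   return u*d + r_i;
}
]]>
              </artwork>
            </figure>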

            <t>
              To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook and the vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder.
            </t>
          </section>


          <section title='Noise Shaping Quantizer'>
            <t>
              The noise shaping quantizer independently shapes the signal and coding noise spectra to obtain a perceptually higher quality at the same bitrate.
            </t>
            <t>
              The prefilter output signal is multiplied with a compensation gain G computed in the noise shaping analysis. Then the output of a synthesis shaping filter is added, and the output of a prediction filter is subtracted to create a residual signal. The residual signal is multiplied by the inverse quantized quantization gain from the noise shaping analysis, and input to a scalar quantizer. The quantization indices of the scalar quantizer represent a signal of pulses that is input to the pyramid range encoder. The scalar quantizer also outputs a quantization signal, which is multiplied by the quantized quantization gain from the noise shaping analysis to create an excitation signal. The output of the prediction filter is added to the excitation signal to form the quantized output signal y(n). The quantized output signal y(n) is input to the synthesis shaping and prediction filters.
            </t>

          </section>

          <section title='Range Encoder'>
            <t>
              Range encoding is a well-known method for entropy coding in which a bitstream sequence is continually updated with every new symbol, based on the probability for that symbol. It is similar to arithmetic coding, but rather than being restricted to generating binary output symbols, it can generate symbols in any chosen number base. In SILK, all side information is range encoded. Each quantized parameter has its own cumulative distribution function based on histograms of the quantization indices obtained from a training database.
            </t>

            <section title='Bitstream Encoding Details'>
              <t>
                TBD.
              </t>
            </section>
          </section>
        </section>


<section title="CELT Encoder">
<t>
Copy from CELT draft.
</t>

<section anchor="prefilter" title="Pre-filter">
<t>
Inverse of the post-filter
</t>
</section>


<section anchor="forward-mdct" title="Forward MDCT">

<t>The MDCT implementation has no special characteristics. The
input is a windowed signal (after pre-emphasis) of 2*N samples and the output is N
frequency-domain samples. A <spanx style="emph">low-overlap</spanx> window is used to reduce the algorithmic delay.
It is derived from a basic (full overlap) window that is the same as the one used in the Vorbis codec: W(n)=[sin(pi/2*sin(pi/2*(n+.5)/L))]^2. The low-overlap window is created by zero-padding the basic window and inserting ones in the middle, such that the resulting window still satisfies power complementarity. The MDCT is computed in mdct_forward() (mdct.c), which includes the windowing operation and a scaling of 2/N.
</t>
</section>

<section anchor="normalization" title="Bands and Normalization">
<t>
The MDCT output is divided into bands that are designed to match the ear's critical
bands for the smallest (2.5 ms) frame size. The larger frame sizes use integer
multiples of the 2.5 ms layout. For each band, the encoder
computes the energy that will later be encoded. Each band is then normalized by the
square root of the <spanx style="strong">non-quantized</spanx> energy, such that each band now forms a unit vector X.
The energy and the normalization are computed by compute_band_energies()
and normalise_bands() (bands.c), respectively.
</t>
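<t>
A non-normative sketch of the per-band normalization is shown below; the small
offset added to the energy is an assumption made here to avoid division by
zero and is not part of the description above.
</t>
<figure>
<artwork>
<![CDATA[
#include <math.h>

/* Non-normative sketch: compute the (non-quantized) energy of one band
 * of N MDCT bins and normalize it to a unit vector X. */
static float normalize_band(const float *band, float *X, int N)
{
   float energy = 1e-27f;   /* tiny offset (assumption) to avoid dividing by 0 */
   float g;
   int j;
   for (j = 0; j < N; j++)
      energy += band[j]*band[j];
   g = 1.0f/sqrtf(energy);
   for (j = 0; j < N; j++)
      X[j] = band[j]*g;
   return energy;
}
]]>
</artwork>
</figure>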
</section>

<section anchor="energy-quantization" title="Energy Envelope Quantization">

<t>
It is important to quantize the energy with sufficient resolution because
any energy quantization error cannot be compensated for at a later
stage. Regardless of the resolution used for encoding the shape of a band,
it is perceptually important to preserve the energy in each band. CELT uses a
coarse-fine strategy for encoding the energy in the base-2 log domain,
as implemented in quant_bands.c.</t>

<section anchor="coarse-energy" title="Coarse energy quantization">
<t>
The coarse quantization of the energy uses a fixed resolution of 6 dB.
To minimize the bitrate, prediction is applied both in time (using the previous frame)
and in frequency (using the previous bands). The prediction using the
previous frame can be disabled, creating an "intra" frame where the energy
is coded without reference to prior frames. An encoder is able to choose the
mode used at will based on both loss robustness and efficiency
considerations.
The 2-D z-transform of
the prediction filter is: A(z_l, z_b)=(1-a*z_l^-1)*(1-z_b^-1)/(1-b*z_b^-1)
where b is the band index and l is the frame index. The prediction coefficients
applied depend on the frame size in use when not using intra energy, and are
a=0 and b=4915/32768 when using intra energy.
The time-domain prediction is based on the final fine quantization of the previous
frame, while the frequency domain (within the current frame) prediction is based
on coarse quantization only (because the fine quantization has not been computed
yet). The prediction is clamped internally so that fixed point implementations with
limited dynamic range do not suffer desynchronization.  Identical prediction
clamping must be implemented in all encoders and decoders.
We approximate the ideal
probability distribution of the prediction error using a Laplace distribution
with separate parameters for each frame size in intra and inter-frame modes. The
coarse energy quantization is performed by quant_coarse_energy()
(quant_bands.c). The encoding of the Laplace-distributed values is
implemented in ec_laplace_encode() (laplace.c).
</t>

<!-- FIXME: bit budget consideration -->
</section> <!-- coarse energy -->

<section anchor="fine-energy" title="Fine energy quantization">
<t>
After the coarse energy quantization and encoding, the bit allocation is computed
(<xref target="allocation"></xref>) and the number of bits to use for refining the
energy quantization is determined for each band. Let B_i be the number of fine energy bits
for band i; the refinement is an integer f in the range [0,2^B_i-1]. The mapping between f
and the correction applied to the coarse energy is equal to (f+1/2)/2^B_i - 1/2. Fine
energy quantization is implemented in quant_fine_energy()
(quant_bands.c).
</t>
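<t>
The mapping from the refinement index to the energy correction can be
illustrated by the following non-normative sketch.
</t>
<figure>
<artwork>
<![CDATA[
/* Non-normative sketch: correction (in base-2 log units) applied to the
 * coarse energy for refinement index f in [0, 2**B_i - 1], given B_i
 * fine energy bits for band i. */
static float fine_energy_correction(int f, int B_i)
{
   return (f + 0.5f)/(float)(1<<B_i) - 0.5f;
}
]]>
</artwork>
</figure>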

<t>
If any bits are unused at the end of the encoding process, these bits are used to
increase the resolution of the fine energy encoding in some bands. Priority is given
to the bands for which the allocation (<xref target="allocation"></xref>) was rounded
down. At the same level of priority, lower bands are encoded first. Refinement bits
are added until there is no more room for fine energy or until each band
has gained an additional bit of precision or has the maximum fine
energy precision. This is implemented in quant_energy_finalise()
(quant_bands.c).
</t>

</section> <!-- fine energy -->


</section> <!-- Energy quant -->


<section anchor="pvq" title="Spherical Vector Quantization">
<t>CELT uses a Pyramid Vector Quantization (PVQ) <xref target="PVQ"></xref>
codebook for quantizing the details of the spectrum in each band that have not
been predicted by the pitch predictor. The PVQ codebook consists of all sums
of K signed pulses in a vector of N samples, where two pulses at the same position
are required to have the same sign. Thus the codebook includes
all integer codevectors y of N dimensions that satisfy sum(abs(y(j))) = K.
</t>

<t>
In bands where there are sufficient bits allocated the PVQ is used to encode
the unit vector that results from the normalization in
<xref target="normalization"></xref> directly. Given a PVQ codevector y,
the unit vector X is obtained as X = y/||y||, where ||.|| denotes the
L2 norm.
</t>


<section anchor="pvq-search" title="PVQ Search">

<t>
The search for the best codevector y is performed by alg_quant()
(vq.c). There are several possible approaches to the
search with a tradeoff between quality and complexity. The method used in the reference
implementation computes an initial codeword y0 by projecting the residual signal
R = X - p' onto the codebook pyramid of K-1 pulses:
</t>
<t>
y0 = round_towards_zero( (K-1) * R / sum(abs(R)))
</t>

<t>
Depending on N, K and the input data, the initial codeword y0 may contain from
0 to K-1 non-zero values. All the remaining pulses, with the exception of the last one,
are found iteratively with a greedy search that minimizes the normalized correlation
between y and R:
</t>

<t>
J = -R^T*y / ||y||
</t>

<t>
The search described above is considered to be a good trade-off between quality
and computational cost. However, there are other possible ways to search the PVQ
codebook and the implementors MAY use any other search methods.
</t>
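<t>
The following non-normative sketch illustrates the projection-plus-greedy
strategy described above on a target vector R; it omits the special handling
of the final pulse and the fixed-point and complexity optimizations found in
alg_quant().
</t>
<figure>
<artwork>
<![CDATA[
#include <math.h>
#include <stdlib.h>

/* Non-normative sketch of the PVQ search: project R onto the pyramid of
 * K-1 pulses, then add the remaining pulses greedily so as to minimize
 * J = -R^T*y/||y|| (equivalently, maximize (R^T*y)^2/||y||^2).  R is
 * assumed not to be the all-zero vector and K >= 1. */
static void pvq_search_sketch(const float *R, int *y, int N, int K)
{
   float xy = 0.0f, yy = 0.0f, sum = 0.0f;
   int i, pulses = 0;

   for (i = 0; i < N; i++)
      sum += fabsf(R[i]);
   for (i = 0; i < N; i++) {
      y[i] = (int)((K - 1)*R[i]/sum);       /* truncation rounds towards zero */
      pulses += abs(y[i]);
      xy += R[i]*(float)y[i];
      yy += (float)y[i]*(float)y[i];
   }

   while (pulses < K) {
      int best = 0;
      float best_num = -1.0f, best_den = 1.0f;
      for (i = 0; i < N; i++) {
         /* Adding one pulse with the sign of R[i] at position i. */
         float num = xy + fabsf(R[i]);
         float den = yy + 2.0f*fabsf((float)y[i]) + 1.0f;
         if (num*num*best_den > best_num*den) {  /* compare num^2/den */
            best_num = num*num;
            best_den = den;
            best = i;
         }
      }
      xy += fabsf(R[best]);
      yy += 2.0f*fabsf((float)y[best]) + 1.0f;
      y[best] += R[best] < 0 ? -1 : 1;
      pulses++;
   }
}
]]>
</artwork>
</figure>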
</section>


<section anchor="cwrs-encoding" title="Index Encoding">
<t>
The best PVQ codeword is encoded as a uniformly-distributed integer value
by encode_pulses() (cwrs.c).
The codeword is converted to a unique index in the same way as specified in
<xref target="PVQ"></xref>. The indexing is based on the calculation of V(N,K)
(denoted N(L,K) in <xref target="PVQ"></xref>), which is the number of possible
combinations of K pulses in N samples.
</t>

</section>

</section>


<section anchor="stereo" title="Stereo support">
<t>
When encoding a stereo stream, some parameters are shared across the left and right channels, while others are transmitted separately for each channel, or jointly encoded. Only one copy of the flags for the features, transients and pitch (pitch
period and filter parameters) is transmitted. The coarse and fine energy parameters are transmitted separately for each channel. Both the coarse energy and fine energy (including the remaining fine bits at the end of the stream) have the left and right bands interleaved in the stream, with the left band encoded first.
</t>

<t>
The main difference between mono and stereo coding is the PVQ coding of the normalized vectors. In stereo mode, a normalized mid-side (M-S) encoding is used. Let L and R be the normalized vector of a certain band for the left and right channels, respectively. The mid and side vectors are computed as M=L+R and S=L-R and no longer have unit norm.
</t>

<t>
From M and S, an angular parameter theta=2/pi*atan2(||S||, ||M||) is computed. The theta parameter is converted to a Q14 fixed-point parameter itheta, which is quantized on a scale from 0 to 1 with an interval of 2^-qb, where qb is
based on the number of bits allocated to the band. From here on, the value of itheta MUST be treated in a bit-exact manner since both the encoder and decoder rely on it to infer the bit allocation.
</t>
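<t>
The following non-normative sketch computes M, S, and theta for one band; the
subsequent conversion to the Q14 parameter itheta and the derivation of qb
from the band's bit allocation follow the reference implementation and are
not reproduced here.
</t>
<figure>
<artwork>
<![CDATA[
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Non-normative sketch: mid/side decomposition of the normalized left
 * and right vectors of one band and the angular parameter
 * theta = 2/pi * atan2(||S||, ||M||), which lies in [0, 1]. */
static float stereo_theta(const float *left, const float *right,
                          float *M, float *S, int N)
{
   float mm = 0.0f, ss = 0.0f;
   int j;
   for (j = 0; j < N; j++) {
      M[j] = left[j] + right[j];
      S[j] = left[j] - right[j];
      mm += M[j]*M[j];
      ss += S[j]*S[j];
   }
   return (float)(2.0/M_PI*atan2(sqrt(ss), sqrt(mm)));
}
]]>
</artwork>
</figure>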
<t>
Let m=M/||M|| and s=S/||S||; m and s are separately encoded with the PVQ encoder described in <xref target="pvq"></xref>. The number of bits allocated to m and s depends on the value of itheta.
</t>

</section>


<section anchor="synthesis" title="Synthesis">
<t>
After all the quantization is completed, the quantized energy is used along with the
quantized normalized band data to resynthesize the MDCT spectrum. The inverse MDCT (<xref target="inverse-mdct"></xref>) and the weighted overlap-add are applied and the signal is stored in the <spanx style="emph">synthesis
buffer</spanx>.
The encoder MAY omit this step of the processing if it does not need the decoded output.
</t>
</section>

<section anchor="vbr" title="Variable Bitrate (VBR)">
<t>
Each CELT frame can be encoded in a different number of octets, making it possible to vary the bitrate at will. This property can be used to implement source-controlled variable bitrate (VBR). Support for VBR is OPTIONAL for the encoder, but a decoder MUST be prepared to decode a stream that changes its bitrate dynamically. The method used to vary the bitrate in VBR mode is left to the implementor, as long as each frame can be decoded by the reference decoder.
</t>
</section>

</section>

</section>


<section title="Conformance">

<t>
The intention is to allow the greatest possible freedom in
implementing the specification. For this reason, outside of a few exceptions
noted in this section, conformance is defined through the reference
implementation of the decoder provided in <xref target="ref-implementation"/>.
Although this document includes an English description of the codec, should
the description contradict the source code of the reference implementation,
the latter shall take precedence.
</t>

<t>
Compliance with this specification means that a decoder's output MUST be
 within the thresholds specified by the opus_compare.c tool in
 <xref target="opus-compare"/> compared to the reference implementation.
</t>

<t>
To complement the Opus specification, the "Opus Custom" codec is defined to
handle special sampling rates and frame rates that are not supported by the
main Opus specification. Use of Opus Custom is discouraged for all but very
special applications for which a frame size different from 2.5, 5, 10, or 20 ms is
needed (for either complexity or latency reasons). Such applications will not
be compatible with the "main" Opus codec. In Opus Custom operation,
only the CELT layer is available, accessed through the celt_* function
calls in celt.h.
</t>

</section>

<section anchor="security" title="Security Considerations">

<t>
The codec needs to take appropriate security considerations
into account, as outlined in <xref target="DOS"/> and <xref target="SECGUIDE"/>.
It is extremely important for the decoder to be robust against malicious
payloads.
Malicious payloads must not cause the decoder to overrun its allocated memory
 or to take an excessive amount of resources to decode.
Although problems
in encoders are typically rarer, the same applies to the encoder. Malicious
audio streams must not cause the encoder to misbehave, because this would
allow an attacker to attack transcoding gateways.
</t>
<t>
The reference implementation contains no known buffer overflow or cases where
 a specially crafted packet or audio segment could cause a significant increase
 in CPU load.
However, on certain CPU architectures where denormalized floating-point
 operations are much slower than normal floating-point operations it is
 possible for some audio content (e.g., silence or near-silence) to cause such
 an increase in CPU load.
Denormals can be introduced by reordering operations in the compiler and depend
 on the target architecture, so it is difficult to guarantee an implementation
 avoids them.
For such architectures, it is RECOMMENDED that one add very small
 floating-point offsets to prevent significant numbers of denormalized
 operations, or configure the hardware to treat denormals as zero (DAZ).
<!--TODO: Add small offsets to what? We should be explicit-->
No such issue exists for the fixed-point reference implementation.
</t>
</section>


<section title="IANA Considerations ">
<t>
This document has no actions for IANA.
</t>
</section>

<section anchor="Acknowledgments" title="Acknowledgments">
<t>
Thanks to all other developers, including Raymond Chen, Soeren Skak Jensen, Gregory Maxwell,
Christopher Montgomery, Karsten Vandborg Soerensen, and Timothy Terriberry. We would also
like to thank Igor Dyakonov and Jan Skoglund for their help with subjective testing of the
Opus codec. Thanks to John Ridges, Keith Yan, and many others on the Opus and CELT mailing lists
for their bug reports and feedback.
</t>
</section>

</middle>

<back>

<references title="Informative References">

<reference anchor='SILK'>
<front>
<title>SILK Speech Codec</title>
<author initials='K.' surname='Vos' fullname='K. Vos'>
<organization /></author>
<author initials='S.' surname='Jensen' fullname='S. Jensen'>
<organization /></author>
<author initials='K.' surname='Soerensen' fullname='K. Soerensen'>
<organization /></author>
<date year='2010' month='March' />
<abstract>
<t></t>
</abstract></front>
<seriesInfo name='Internet-Draft' value='draft-vos-silk-01' />
<format type='TXT' target='http://tools.ietf.org/html/draft-vos-silk-01' />
</reference>

      <reference anchor="laroia-icassp">
        <front>
          <title abbrev="Robust and Efficient Quantization of Speech LSP">
            Robust and Efficient Quantization of Speech LSP Parameters Using Structured Vector Quantization
          </title>
          <author initials="R.L." surname="Laroia" fullname="R.">
            <organization/>
          </author>
          <author initials="N.P." surname="Phamdo" fullname="N.">
            <organization/>
          </author>
          <author initials="N.F." surname="Farvardin" fullname="N.">
            <organization/>
          </author>
        </front>
        <seriesInfo name="ICASSP-1991, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pp. 641-644, October" value="1991"/>
      </reference>

      <reference anchor="sinervo-norsig">
        <front>
          <title abbrev="SVQ versus MSVQ">Evaluation of Split and Multistage Techniques in LSF Quantization</title>
          <author initials="U.S." surname="Sinervo" fullname="Ulpu Sinervo">
            <organization/>
          </author>
          <author initials="J.N." surname="Nurminen" fullname="Jani Nurminen">
            <organization/>
          </author>
          <author initials="A.H." surname="Heikkinen" fullname="Ari Heikkinen">
            <organization/>
          </author>
          <author initials="J.S." surname="Saarinen" fullname="Jukka Saarinen">
            <organization/>
          </author>
        </front>
        <seriesInfo name="NORSIG-2001, Norsk symposium i signalbehandling, Trondheim, Norge, October" value="2001"/>
      </reference>

      <reference anchor="leblanc-tsap">
        <front>
          <title>Efficient Search and Design Procedures for Robust Multi-Stage VQ of LPC Parameters for 4&nbsp;kb/s Speech Coding</title>
          <author initials="W.P." surname="LeBlanc" fullname="">
            <organization/>
          </author>
          <author initials="B." surname="Bhattacharya" fullname="">
            <organization/>
          </author>
          <author initials="S.A." surname="Mahmoud" fullname="">
            <organization/>
          </author>
          <author initials="V." surname="Cuperman" fullname="">
            <organization/>
          </author>
        </front>
        <seriesInfo name="IEEE Transactions on Speech and Audio Processing, Vol. 1, No. 4, October" value="1993" />
      </reference>

<reference anchor='CELT'>
<front>
<title>Constrained-Energy Lapped Transform (CELT) Codec</title>
<author initials='J-M.' surname='Valin' fullname='J-M. Valin'>
<organization /></author>
<author initials='T.' surname='Terriberry' fullname='T. Terriberry'>
<organization /></author>
<author initials='G.' surname='Maxwell' fullname='G. Maxwell'>
<organization /></author>
<author initials='C.' surname='Montgomery' fullname='C. Montgomery'>
<organization /></author>
<date year='2010' month='July' />
<abstract>
<t></t>
</abstract></front>
<seriesInfo name='Internet-Draft' value='draft-valin-celt-codec-02' />
<format type='TXT' target='http://tools.ietf.org/html/draft-valin-celt-codec-02' />
</reference>

<reference anchor='DOS'>
<front>
<title>Internet Denial-of-Service Considerations</title>
<author initials='M.' surname='Handley' fullname='M. Handley'>
<organization /></author>
<author initials='E.' surname='Rescorla' fullname='E. Rescorla'>
<organization /></author>
<author>
<organization>IAB</organization></author>
<date year='2006' month='December' />
<abstract>
<t>This document provides an overview of possible avenues for denial-of-service (DoS) attack on Internet systems.  The aim is to encourage protocol designers and network engineers towards designs that are more robust.  We discuss partial solutions that reduce the effectiveness of attacks, and how some solutions might inadvertently open up alternative vulnerabilities.  This memo provides information for the Internet community.</t></abstract></front>
<seriesInfo name='RFC' value='4732' />
<format type='TXT' octets='91844' target='ftp://ftp.isi.edu/in-notes/rfc4732.txt' />
</reference>

<reference anchor='SECGUIDE'>
<front>
<title>Guidelines for Writing RFC Text on Security Considerations</title>
<author initials='E.' surname='Rescorla' fullname='E. Rescorla'>
<organization /></author>
<author initials='B.' surname='Korver' fullname='B. Korver'>
<organization /></author>
<date year='2003' month='July' />
<abstract>
<t>All RFCs are required to have a Security Considerations section.  Historically, such sections have been relatively weak.  This document provides guidelines to RFC authors on how to write a good Security Considerations section.  This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t></abstract></front>

<seriesInfo name='BCP' value='72' />
<seriesInfo name='RFC' value='3552' />
<format type='TXT' octets='110393' target='ftp://ftp.isi.edu/in-notes/rfc3552.txt' />
</reference>

<reference anchor="range-coding">
<front>
<title>Range encoding: An algorithm for removing redundancy from a digitised message</title>
<author initials="G." surname="Nigel" fullname=""><organization/></author>
<author initials="N." surname="Martin" fullname=""><organization/></author>
<date year="1979" />
</front>
<seriesInfo name="Proc. Institution of Electronic and Radio Engineers International Conference on Video and Data Recording" value="" />
</reference>

<reference anchor="coding-thesis">
<front>
<title>Source coding algorithms for fast data compression</title>
<author initials="R." surname="Pasco" fullname=""><organization/></author>
<date month="May" year="1976" />
</front>
<seriesInfo name="Ph.D. thesis" value="Dept. of Electrical Engineering, Stanford University" />
</reference>

<reference anchor="PVQ">
<front>
<title>A Pyramid Vector Quantizer</title>
<author initials="T." surname="Fischer" fullname=""><organization/></author>
<date month="July" year="1986" />
</front>
<seriesInfo name="IEEE Trans. on Information Theory, Vol. 32" value="pp. 568-583" />
</reference>

</references>

<section anchor="ref-implementation" title="Reference Implementation">

<t>This appendix contains the complete source code for the
reference implementation of the Opus codec written in C. This
implementation can be compiled for
either floating-point or fixed-point architectures.
</t>

<t>The implementation can be compiled with either a C89 or a C99
compiler. It is reasonably optimized for most platforms such that
only architecture-specific optimizations are likely to be useful.
The FFT used is a slightly modified version of the KISS-FFT package,
but it is easy to substitute any other FFT library.
</t>
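<t>
As a non-normative illustration of how an application might use the decoder,
 the sketch below assumes the opus.h API of the libopus library
 (opus_decoder_create, opus_decode, and opus_decoder_destroy); the exact
 names and signatures in the code included with this draft may differ.
The read_packet() function is hypothetical and stands in for whatever
 transport or container delivers Opus packets to the application.
</t>
<figure>
<artwork><![CDATA[
#include "opus.h"  /* from the extracted reference implementation */

/* Hypothetical transport function: fills buf with one Opus packet and
   returns its size in bytes, or 0 when no more packets are available. */
extern int read_packet(unsigned char *buf, int maxlen);

#define MAX_FRAME_SIZE 5760  /* 120 ms at 48 kHz */
#define CHANNELS       2

int decode_stream(void)
{
   int err, len, samples;
   unsigned char packet[1500];
   opus_int16 pcm[MAX_FRAME_SIZE*CHANNELS];
   OpusDecoder *dec;

   dec = opus_decoder_create(48000, CHANNELS, &err);
   if (dec == NULL || err != OPUS_OK)
      return -1;
   while ((len = read_packet(packet, (int)sizeof(packet))) > 0)
   {
      /* Returns the number of decoded samples per channel, or a
         negative error code for a corrupt packet. */
      samples = opus_decode(dec, packet, len, pcm, MAX_FRAME_SIZE, 0);
      if (samples < 0)
         break;
      /* ... play back or store `samples' samples per channel ... */
   }
   opus_decoder_destroy(dec);
   return 0;
}
]]></artwork>
</figure>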

<section title="Extracting the source">
<t>
The complete source code can be extracted from this draft by running the
following commands:

<list style="symbols">
<t><![CDATA[
cat draft-ietf-codec-opus.txt | grep '^\ \ \ ###' | sed 's/\s\s\s###//' | base64 -d > opus_source.tar.gz
]]></t>
<t>
tar xzvf opus_source.tar.gz
</t>
<t>cd opus_source</t>
<t>make</t>
</list>

</t>
</section>

<section title="Development Versions">
<t>
The current development version of the source code is available in a
 <eref target='git://git.opus-codec.org/opus.git'>Git repository</eref>.
Development snapshots are provided at
 <eref target='http://opus-codec.org/'/>.
</t>
</section>

<section title="Base64-encoded source code">
<t>
<?rfc include="opus_source.base64"?>
</t>
</section>

</section>

<section anchor="opus-compare" title="opus_compare.c">
<t>
<?rfc include="opus_compare_escaped.c"?>
</t>
</section>

</back>

</rfc>