MediaStream audio: Refactor 3 separate "glue" implementations into one.

This is a refactoring of the MediaStreamAudioSource/Track object graph
such that life-cycle and audio flow control are unified into a single
architecture.  Previously, each of three implementations solved the
same problem in a different way, which made it difficult to maintain
the code in src/content/renderer/media across all product features,
WebRTC or otherwise.

Diagram of post-refactoring class relationships: https://docs.google.com/drawings/d/1yTsXvRMIyMlXjIEeQOVqXPxMPWgekB5ePWUVSI5-zwo/edit?usp=sharing

The new architecture is as follows:

1. MediaStreamAudioSource becomes a base class implementation for
creating MediaStreamAudioTracks and delivering audio to them from the
source (i.e., an AudioInputDevice, a PeerConnection remote source, or a
WebAudio source).  All of its methods are now assumed to run on the
main thread, while audio flow may optionally occur on a separate thread
(see the sketch after this list).

2. MediaStreamAudioTrack becomes a base class implementation for
connecting/disconnecting MediaStreamAudioSinks, and delivering audio to
them from the source.

3. Both MediaStreamAudioSource and MediaStreamAudioTrack are owned by
their blink counterparts (WebMediaStreamSource and WebMediaStreamTrack),
and their destruction may safely occur any time the blink implementation
requires it.
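
A minimal sketch of the base-class relationships described above, using
simplified stand-ins rather than the real Chromium/Blink types (the
actual classes also handle audio format changes, thread safety, and
stop/ended notifications):

  #include <algorithm>
  #include <memory>
  #include <vector>

  // Simplified stand-in for media::AudioBus.
  struct AudioBus {};

  // Consumer of a track's audio (e.g., a WebRTC sender or recorder).
  class MediaStreamAudioSink {
   public:
    virtual ~MediaStreamAudioSink() {}
    virtual void OnData(const AudioBus& audio) = 0;
  };

  // Base class: fans audio out from one track to its connected sinks.
  class MediaStreamAudioTrack {
   public:
    void AddSink(MediaStreamAudioSink* sink) { sinks_.push_back(sink); }
    void RemoveSink(MediaStreamAudioSink* sink) {
      sinks_.erase(std::remove(sinks_.begin(), sinks_.end(), sink),
                   sinks_.end());
    }
    // Called by the owning source, possibly on a real-time audio thread.
    void DeliverAudio(const AudioBus& audio) {
      for (MediaStreamAudioSink* sink : sinks_)
        sink->OnData(audio);
    }
   private:
    std::vector<MediaStreamAudioSink*> sinks_;
  };

  // Base class: creates tracks and pushes the source's audio to them.
  // Methods run on the main thread; DeliverAudioToTracks() may be called
  // from a dedicated audio thread.
  class MediaStreamAudioSource {
   public:
    virtual ~MediaStreamAudioSource() {}
    std::unique_ptr<MediaStreamAudioTrack> CreateTrack() {
      auto track = std::make_unique<MediaStreamAudioTrack>();
      tracks_.push_back(track.get());
      // Ownership passes to the blink WebMediaStreamTrack; the real code
      // unregisters the raw pointer when the track is destroyed/stopped.
      return track;
    }
   protected:
    // Called by subclasses (device capture, PeerConnection, WebAudio).
    void DeliverAudioToTracks(const AudioBus& audio) {
      for (MediaStreamAudioTrack* track : tracks_)
        track->DeliverAudio(audio);
    }
   private:
    std::vector<MediaStreamAudioTrack*> tracks_;  // Not owned.
  };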

Following the new architecture, the refactoring is:

1. WebRtcAudioCapturer becomes a sub-class of MediaStreamAudioSource and
is renamed to ProcessedLocalAudioSource (see the sketch after this
list).  This new class owns and manages the WebRTC-specific audio
processing on the source data.  Also:
  a) A significant amount of implementation was consolidated/moved from
     PeerConnectionDependencyFactory into this class.
  b) The "EnablePeerConnectionMode" functionality, which re-started the
     audio input with a different buffer size, has been removed to
     simplify the implementation.  The buffer size is now determined
     BEFORE the audio input device is first started.
  c) Currently, all local audio sources (i.e., via AudioInputDevice) are
     routed through this pipeline; an upcoming change will split the
     WebRTC-specific cases from those that should not go through audio
     processing.

2. MediaStreamRemoteAudioTrack becomes a sub-class of
MediaStreamAudioSource and is renamed to
PeerConnectionRemoteAudioSource.  This new class owns and manages the
flow of audio data from a PeerConnection into the MediaStream
framework.

3. WebAudioCapturer becomes a sub-class of MediaStreamAudioSource and is
renamed to WebAudioMediaStreamSource.  As a
blink::WebAudioDestinationConsumer, it manages the flow of audio from a
WebAudio destination node into the MediaStream framework.
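
Building on the sketch above, the three renamed classes reduce to
subclasses of MediaStreamAudioSource that differ only in where their
audio comes from.  The callback names below are illustrative, not the
actual interfaces:

  // Local device capture; owns the WebRTC audio-processing pipeline.
  class ProcessedLocalAudioSource : public MediaStreamAudioSource {
   protected:
    // Hypothetical capture callback, driven by an AudioInputDevice.
    void OnCapturedAudio(const AudioBus& audio) {
      DeliverAudioToTracks(RunWebRtcAudioProcessing(audio));
    }
   private:
    // Placeholder for the WebRTC audio-processing step.
    AudioBus RunWebRtcAudioProcessing(const AudioBus& audio) { return audio; }
  };

  // Remote audio decoded by a PeerConnection.
  class PeerConnectionRemoteAudioSource : public MediaStreamAudioSource {
   protected:
    // Hypothetical callback invoked as remote audio frames arrive.
    void OnRemoteAudio(const AudioBus& audio) { DeliverAudioToTracks(audio); }
  };

  // Audio rendered by a WebAudio destination node (the real class also
  // implements blink::WebAudioDestinationConsumer).
  class WebAudioMediaStreamSource : public MediaStreamAudioSource {
   protected:
    void ConsumeAudio(const AudioBus& audio) { DeliverAudioToTracks(audio); }
  };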

BUG=577881,577874

Review-Url: https://codereview.chromium.org/1834323002
Cr-Commit-Position: refs/heads/master@{#392845}
75 files changed