Date: 2017-02-11
From: Ruben Safir
Subject: [Hangout-NYLXS] WebRTC coding in html5
https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
WebRTC enables peer-to-peer communication.
BUT...
WebRTC still needs servers:
For clients to exchange metadata to coordinate communication:
this is called signaling.
To cope with network address translators (NATs) and firewalls.
In this article we show you how to build a signaling service, and
how to deal with the quirks of real-world connectivity by using STUN and
TURN servers. We also explain how WebRTC apps can handle multi-party
calls and interact with services such as VoIP and PSTN (aka telephones).
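Both kinds of server are typically named up front when the peer connection is
constructed. Here is a minimal sketch, assuming placeholder server URLs and
credentials (this article does not provide actual servers):

// Hypothetical STUN/TURN configuration: the example.org URLs and the
// credentials are placeholders for illustration only.
const config = {
  iceServers: [
    {urls: 'stun:stun.example.org:3478'},
    {urls: 'turn:turn.example.org:3478', username: 'webrtc', credential: 'secret'}
  ]
};
const pc = new RTCPeerConnection(config);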
If you're not familiar with the basics of WebRTC, we strongly
recommend you take a look at Getting Started With WebRTC before reading
this article.
What is signaling?
Signaling is the process of coordinating communication. In order for a
WebRTC application to set up a 'call', its clients need to exchange
information:
Session control messages used to open or close communication.
Error messages.
Media metadata such as codecs and codec settings, bandwidth and
media types.
Key data, used to establish secure connections.
Network data, such as a host's IP address and port as seen by the
outside world.
This signaling process needs a way for clients to pass messages back and
forth. That mechanism is not implemented by the WebRTC APIs: you need to
build it yourself.
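As a taste of what that involves, here is a minimal sketch of a client-side
signaling channel over WebSocket. The endpoint URL, room name, and message
shape are all illustrative assumptions; any transport that can relay
messages between the two browsers would do:

// Minimal signaling-channel sketch. The server at this URL, the room id,
// and the message format are assumptions for illustration only.
const ws = new WebSocket('wss://signaling.example.org');
ws.onopen = () => ws.send(JSON.stringify({join: 'room-42'}));

// Relay any signaling payload (offer, answer or ICE candidate) to the peer.
function sendSignal(payload) {
  ws.send(JSON.stringify({room: 'room-42', payload}));
}

// Hand incoming payloads to app-defined handling (assumed to exist elsewhere).
ws.onmessage = (event) => {
  const {payload} = JSON.parse(event.data);
  handleSignal(payload);
};

We describe below some ways to build a signaling service. First, however, a
little context...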
Why is signaling not defined by WebRTC?
To avoid redundancy and to maximize compatibility with established
technologies, signaling methods and protocols are not specified by
WebRTC standards. This approach is outlined by JSEP, the JavaScript
Session Establishment Protocol:
The thinking behind WebRTC call setup has been to fully specify and
control the media plane, but to leave the signaling plane up to the
application as much as possible. The rationale is that different
applications may prefer to use different protocols, such as the existing
SIP or Jingle call signaling protocols, or something custom to the
particular application, perhaps for a novel use case. In this approach,
the key information that needs to be exchanged is the multimedia session
description, which specifies the necessary transport and media
configuration information necessary to establish the media plane.
JSEP's architecture also avoids a browser having to save state: that is,
to function as a signaling state machine. This would be problematic if,
for example, signaling data was lost each time a page was reloaded.
Instead, signaling state can be saved on a server.
[Figure: JSEP architecture diagram]
JSEP requires the exchange between peers of an offer and an answer: the
media metadata mentioned above. Offers and answers are communicated in
Session Description Protocol (SDP) format, which looks like this:
v=0
o=- 7614219274584779017 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio video
a=msid-semantic: WMS
m=audio 1 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:W2TGCZw2NZHuwlnf
a=ice-pwd:xdQEccP40E+P0L5qTyzDgfmW
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=rtcp-mux
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:9c1AHz27dZ9xPI91YNfSlI67/EMkjHHIHORiClQe
a=rtpmap:111 opus/48000/2
…
Want to know what all this SDP gobbledygook actually means? Take a look
at the IETF examples.
Bear in mind that WebRTC is designed so that the offer or answer can be
tweaked before being set as the local or remote description, by editing
the values in the SDP text. For example, the preferAudioCodec() function
in apprtc.appspot.com can be used to set the default codec and bitrate.
SDP is somewhat painful to manipulate with JavaScript, and there is
discussion about whether future versions of WebRTC should use JSON
instead, but there are some advantages to sticking with SDP.
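To make the idea concrete, here is a sketch in the spirit of that function
(not apprtc's actual code): it moves a chosen codec's payload type to the
front of the m=audio format list, which is how a browser is told to prefer
that codec, and then applies the munged offer:

// Illustrative reimplementation, not apprtc's preferAudioCodec() itself.
// Moves the payload type of `codec` (e.g. 'opus') to the front of the
// m=audio format list so the browser prefers it.
function preferAudioCodec(sdp, codec) {
  const lines = sdp.split('\r\n');
  const rtpmap = lines.find(l => l.startsWith('a=rtpmap:') && l.includes(codec));
  if (!rtpmap) return sdp;                       // codec not offered: no change
  const pt = rtpmap.slice('a=rtpmap:'.length).split(' ')[0];
  const i = lines.findIndex(l => l.startsWith('m=audio'));
  const parts = lines[i].split(' ');             // m=audio <port> <proto> <fmt> ...
  lines[i] = parts.slice(0, 3)
      .concat(pt, parts.slice(3).filter(p => p !== pt)).join(' ');
  return lines.join('\r\n');
}

// Inside an async function: munge the offer before it becomes the
// local description.
const pc = new RTCPeerConnection();
const offer = await pc.createOffer();
await pc.setLocalDescription({type: offer.type,
                              sdp: preferAudioCodec(offer.sdp, 'opus')});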
RTCPeerConnection + signaling: offer, answer and candidate
RTCPeerConnection is the API used by WebRTC applications to create a
connection between peers and communicate audio and video.
To initialize this process, RTCPeerConnection has two tasks:

1. Ascertain local media conditions, such as resolution and codec
capabilities. This is the metadata used for the offer and answer mechanism.

2. Get potential network addresses for the application's host, known as
candidates.
Once this local data has been ascertained, it must be exchanged via a
signaling mechanism with the remote peer.
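In code, the second task, candidate gathering and exchange, looks roughly
like this; sendSignal() stands in for whatever signaling transport the
application provides (an assumption, as in the earlier sketch):

// pc is an RTCPeerConnection as in the earlier sketches. Candidates
// trickle in as the browser discovers them (gathering starts once a local
// description is set); forward each one to the remote peer.
pc.onicecandidate = ({candidate}) => {
  if (candidate) {
    sendSignal({type: 'candidate', candidate: candidate.toJSON()});
  }
};

// The remote peer feeds each received candidate into its own connection.
async function onRemoteCandidate(message) {
  await pc.addIceCandidate(message.candidate);
}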
Imagine Alice is trying to call Eve. Here's the full offer/answer
mechanism in all its gory detail:
1. Alice creates an RTCPeerConnection object.

2. Alice creates an offer (an SDP session description) with the
RTCPeerConnection createOffer() method.

3. Alice calls setLocalDescription() with her offer.

4. Alice stringifies the offer and uses a signaling mechanism to send
it to Eve.

5. Eve calls setRemoteDescription() with Alice's offer, so that her
RTCPeerConnection knows about Alice's setup.

6. Eve calls createAnswer(), and the success callback for this is
passed a local session description: Eve's answer.

7. Eve sets her answer as the local description by calling
setLocalDescription().

8. Eve then uses the signaling mechanism to send her stringified answer
back to Alice.

9. Alice sets Eve's answer as the remote session description using
setRemoteDescription().
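In the promise-based form of these APIs (the steps above describe the older
success-callback form), the whole exchange is roughly the following sketch;
sendSignal() and the received message are assumed stand-ins for the
signaling mechanism:

// Alice's (caller's) side, inside an async function:
const pc = new RTCPeerConnection();                                   // step 1
const offer = await pc.createOffer();                                 // step 2
await pc.setLocalDescription(offer);                                  // step 3
sendSignal({type: 'offer', sdp: offer.sdp});                          // step 4

// Eve's (callee's) side, on receiving Alice's offer; `received` is the
// parsed signaling message (assumed).
await pc.setRemoteDescription({type: 'offer', sdp: received.sdp});    // step 5
const answer = await pc.createAnswer();                               // step 6
await pc.setLocalDescription(answer);                                 // step 7
sendSignal({type: 'answer', sdp: answer.sdp});                        // step 8

// Back on Alice's side, on receiving Eve's answer:
await pc.setRemoteDescription({type: 'answer', sdp: received.sdp});   // step 9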
--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www2.mrbrklyn.com/resources - Unpublished Archive
http://www.coinhangout.com - coins!
http://www.brooklyn-living.com
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013