
[WebRTC] Real time communication with WebRTC 2

by YuminK 2022. 6. 8.

https://codelabs.developers.google.com/codelabs/webrtc-web

 

5. Stream video with RTCPeerConnection

In this step you'll find out how to:

  • Abstract away browser differences with the WebRTC shim, adapter.js
  • Use the RTCPeerConnection API to stream video. 
  • Control media capture and streaming.


 

What is RTCPeerConnection?

A WebRTC API used to exchange video, audio, or data between peers.

 

Add video elements and control buttons

In index.html replace the single video element with two video elements and three buttons:

<video id="localVideo" autoplay playsinline></video>
<video id="remoteVideo" autoplay playsinline></video>


<div>
  <button id="startButton">Start</button>
  <button id="callButton">Call</button>
  <button id="hangupButton">Hang Up</button>
</div>

One video element will display the stream from getUserMedia() and the other will show the same video streamed via RTCPeerConnection. (In a real-world application, one video element would display the local stream and the other the remote stream.)

Add the adapter.js shim

Add a link to the current version of adapter.js above the link to main.js:

<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>

adapter.js is a shim to insulate apps from spec changes and prefix differences. (Though in fact, the standards and protocols used for WebRTC implementations are highly stable, and there are only a few prefixed names.)

In this step, we've linked to the most recent version of adapter.js, which is fine for a codelab but may not be right for a production app. The adapter.js GitHub repo explains techniques for making sure your app always accesses the most recent version.

For full information about WebRTC interop, see WebRTC's Interop Notes.

index.html should now look like this:

<!DOCTYPE html>
<html>

<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="css/main.css" />
</head>

<body>
  <h1>Realtime communication with WebRTC</h1>

  <video id="localVideo" autoplay playsinline></video>
  <video id="remoteVideo" autoplay playsinline></video>

  <div>
    <button id="startButton">Start</button>
    <button id="callButton">Call</button>
    <button id="hangupButton">Hang Up</button>
  </div>

  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
  <script src="js/main.js"></script>
</body>
</html>

Install the RTCPeerConnection code

Replace main.js with the version in the step-02 folder.

It's not ideal doing cut-and-paste with large chunks of code in a codelab, but in order to get RTCPeerConnection up and running, there's no alternative but to go the whole hog.

You'll learn how the code works in a moment.

Make the call

Open index.html, click the Start button to get video from your webcam, and click Call to make the peer connection. You should see the same video (from your webcam) in both video elements. View the browser console to see WebRTC logging.
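The log lines you see in the console come from a small trace() helper used throughout main.js. As a simplified, hedged stand-in (the codelab's version timestamps with window.performance.now(); this sketch uses Date.now() so it also runs outside the browser, and returns the trimmed text for convenience):

```javascript
// Simplified stand-in for the codelab's trace() logging helper.
// The real helper timestamps with window.performance.now(); this sketch
// uses Date.now() so it can run outside the browser as well.
function trace(text) {
  const trimmed = text.trim();
  const now = (Date.now() / 1000).toFixed(3);
  console.log(now, trimmed);
  return trimmed;  // returned so callers can inspect the logged message
}
```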

 

How it works

WebRTC uses the RTCPeerConnection API to set up video streaming between WebRTC clients.

 

Setting up a call between RTC peers involves three tasks:

  • Create a RTCPeerConnection for each end of the call and, at each end, add the local stream from getUserMedia().
  • Get and share network information: potential connection endpoints are known as ICE candidates.
  • Get and share local and remote descriptions: metadata about local media in SDP format.


 

ICE (Wikipedia)

Interactive Connectivity Establishment (ICE) is a technique used in computer networking to find ways for two computers to talk to each other as directly as possible in peer-to-peer networking. This is most commonly used for interactive media such as Voice over Internet Protocol (VoIP), peer-to-peer communications, video, and instant messaging. In such applications, communicating through a central server would be slow and expensive, but direct communication between client applications on the Internet is very tricky due to network address translators (NATs), firewalls, and other network barriers.

 

SDP (Wikipedia)

The Session Description Protocol (SDP) is a format for describing multimedia communication sessions for the purposes of announcement and invitation.[1] Its predominant use is in support of streaming media applications, such as voice over IP (VoIP) and video conferencing. SDP does not deliver any media streams itself but is used between endpoints for negotiation of network metrics, media types, and other associated properties. The set of properties and parameters is called a session profile.
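As an illustration (not taken from this codelab), the opening lines of a session description look something like the fragment below. The field names come from the SDP specification; the values here are made up:

```
v=0
o=- 4611731400430051336 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio video
m=audio 9 UDP/TLS/RTP/SAVPF 111 103
a=rtpmap:111 opus/48000/2
```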

 

Imagine Alice and Bob want to set up a video chat using RTCPeerConnection.

First, they exchange network information. The expression "finding candidates" refers to the process of finding network interfaces and ports using the ICE framework.

 

  1. Alice creates an RTCPeerConnection object with an onicecandidate (addEventListener('icecandidate')) handler. This corresponds to the following code from main.js:


let localPeerConnection;
localPeerConnection = new RTCPeerConnection(servers);
localPeerConnection.addEventListener('icecandidate', handleConnection);
localPeerConnection.addEventListener(
    'iceconnectionstatechange', handleConnectionChange);

The servers argument to RTCPeerConnection isn't used in this example. This is where you could specify STUN and TURN servers.

WebRTC is designed to work peer-to-peer, so users can connect by the most direct route possible. However, WebRTC is built to cope with real-world networking: client applications need to traverse NAT gateways and firewalls, and peer-to-peer networking needs fallbacks in case direct connection fails.

As part of this process, the WebRTC APIs use STUN servers to get the IP address of your computer, and TURN servers to function as relay servers in case peer-to-peer communication fails. WebRTC in the real world explains this in more detail.
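As a sketch of what a filled-in servers argument could look like (the TURN URL and credentials below are placeholders invented for illustration, not part of this codelab):

```javascript
// Hypothetical RTCConfiguration with STUN and TURN entries.
// stun.l.google.com is a well-known public STUN server; the TURN entry
// is a placeholder: in production you would run or rent a TURN server.
const servers = {
  iceServers: [
    {urls: 'stun:stun.l.google.com:19302'},
    {
      urls: 'turn:turn.example.org:3478',  // placeholder server
      username: 'webrtc-user',             // placeholder credential
      credential: 'secret'                 // placeholder credential
    }
  ]
};
// In the browser you would then do: new RTCPeerConnection(servers);
```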


 

2. Alice calls getUserMedia() and adds the stream that is passed to it:

navigator.mediaDevices.getUserMedia(mediaStreamConstraints).
  then(gotLocalMediaStream).
  catch(handleLocalMediaStreamError);

function gotLocalMediaStream(mediaStream) {
  localVideo.srcObject = mediaStream;
  localStream = mediaStream;
  trace('Received local stream.');
  callButton.disabled = false;  // Enable call button.
}

localPeerConnection.addStream(localStream);
trace('Added local stream to localPeerConnection.');

3. The onicecandidate handler is called when network candidates become available.

4. Alice sends serialized candidate data to Bob. In a real application, this process (known as signaling) takes place via a messaging service; you'll learn how to do that in a later step. In this step, the two RTCPeerConnection objects are on the same page and can communicate directly, with no need for external messaging.

5. When Bob gets a candidate message from Alice, he calls addIceCandidate() to add the candidate to the remote peer description:

function handleConnection(event) {
  const peerConnection = event.target;
  const iceCandidate = event.candidate;

  if (iceCandidate) {
    const newIceCandidate = new RTCIceCandidate(iceCandidate);
    const otherPeer = getOtherPeer(peerConnection);

    otherPeer.addIceCandidate(newIceCandidate)
      .then(() => {
        handleConnectionSuccess(peerConnection);
      }).catch((error) => {
        handleConnectionFailure(peerConnection, error);
      });

    trace(`${getPeerName(peerConnection)} ICE candidate:\n` +
          `${event.candidate.candidate}.`);
  }
}

WebRTC peers also need to exchange local and remote audio and video media information, such as resolution and codec capabilities.

Signaling to exchange media configuration information proceeds by exchanging blobs of metadata, known as an offer and an answer, using the Session Description Protocol (SDP) format.

 

1. Alice runs the RTCPeerConnection createOffer() method. The returned promise provides an RTCSessionDescription: Alice's local session description.

trace('localPeerConnection createOffer start.');
localPeerConnection.createOffer(offerOptions)
  .then(createdOffer).catch(setSessionDescriptionError);

2. If successful, Alice sets the local description using setLocalDescription() and then sends this session description to Bob via their signaling channel.

3. Bob sets the description Alice sent him as the remote description, using setRemoteDescription().

4. Bob runs the RTCPeerConnection createAnswer() method, passing it the remote description he got from Alice, so a local session can be generated that is compatible with hers. The createAnswer() promise passes on an RTCSessionDescription: Bob sets that as the local description and sends it to Alice.

5. When Alice gets Bob's session description, she sets it as the remote description using setRemoteDescription().

// Logs offer creation and sets peer connection session descriptions.
function createdOffer(description) {
  trace(`Offer from localPeerConnection:\n${description.sdp}`);

  trace('localPeerConnection setLocalDescription start.');
  localPeerConnection.setLocalDescription(description)
    .then(() => {
      setLocalDescriptionSuccess(localPeerConnection);
    }).catch(setSessionDescriptionError);

  trace('remotePeerConnection setRemoteDescription start.');
  remotePeerConnection.setRemoteDescription(description)
    .then(() => {
      setRemoteDescriptionSuccess(remotePeerConnection);
    }).catch(setSessionDescriptionError);

  trace('remotePeerConnection createAnswer start.');
  remotePeerConnection.createAnswer()
    .then(createdAnswer)
    .catch(setSessionDescriptionError);
}

// Logs answer to offer creation and sets peer connection session descriptions.
function createdAnswer(description) {
  trace(`Answer from remotePeerConnection:\n${description.sdp}.`);

  trace('remotePeerConnection setLocalDescription start.');
  remotePeerConnection.setLocalDescription(description)
    .then(() => {
      setLocalDescriptionSuccess(remotePeerConnection);
    }).catch(setSessionDescriptionError);

  trace('localPeerConnection setRemoteDescription start.');
  localPeerConnection.setRemoteDescription(description)
    .then(() => {
      setRemoteDescriptionSuccess(localPeerConnection);
    }).catch(setSessionDescriptionError);
}

In short: Alice sets her local description, Bob receives it and sets it as his remote description, and Bob then calls createAnswer() to create a session compatible with Alice's. createdAnswer() then runs: Bob sets his local description, and that description is delivered to Alice, who sets it as her remote description.
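The ordering of those four set*Description() calls can be sketched with a toy model, where plain objects stand in for the two RTCPeerConnection instances (no real WebRTC APIs are involved, and the sdp strings are dummies):

```javascript
// Toy model of the offer/answer handshake: only the order in which
// descriptions are assigned is modeled, not real WebRTC behavior.
function simulateOfferAnswer() {
  const alice = {localDescription: null, remoteDescription: null};
  const bob = {localDescription: null, remoteDescription: null};

  // 1-2. Alice creates an offer, sets it as her local description,
  //      and "signals" it to Bob.
  const offer = {type: 'offer', sdp: 'v=0 (dummy)'};
  alice.localDescription = offer;
  bob.remoteDescription = offer;        // 3. Bob: setRemoteDescription()

  // 4. Bob answers, sets his local description, and signals it back.
  const answer = {type: 'answer', sdp: 'v=0 (dummy)'};
  bob.localDescription = answer;
  alice.remoteDescription = answer;     // 5. Alice: setRemoteDescription()

  return {alice, bob};
}
```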

 

Bonus points

  1. Take a look at chrome://webrtc-internals. This provides WebRTC stats and debugging data. (A full list of Chrome URLs is at chrome://about.)
  2. Style the page with CSS:
  • Put the videos side by side.
  • Make the buttons the same width, with bigger text.
  • Make sure the layout works on mobile.
  3. From the Chrome Dev Tools console, look at localStream, localPeerConnection and remotePeerConnection.
  4. From the console, look at localPeerConnection.localDescription. What does SDP format look like?

What you learned

In this step you learned how to:

  • Abstract away browser differences with the WebRTC shim, adapter.js.
  • Use the RTCPeerConnection API to stream video.
  • Control media capture and streaming.
  • Share media and network information between peers to enable a WebRTC call.

A complete version of this step is in the step-02 folder.

Tips

  • There's a lot to learn in this step! To find other resources that explain RTCPeerConnection in more detail, take a look at webrtc.org. This page includes suggestions for JavaScript frameworks — if you'd like to use WebRTC, but don't want to wrangle the APIs.
  • Find out more about the adapter.js shim from the adapter.js GitHub repo.
  • Want to see what the world's best video chat app looks like? Take a look at AppRTC, the WebRTC project's canonical app for WebRTC calls: app, code. Call setup time is less than 500 ms.

Best practice

  • To future-proof your code, use the new Promise-based APIs and enable compatibility with browsers that don't support them by using adapter.js.

Next up

This step shows how to use WebRTC to stream video between peers — but this codelab is also about data!

In the next step find out how to stream arbitrary data using RTCDataChannel.

 

