# Open VoIP Alliance Webphone Lib

![npm](https://img.shields.io/npm/v/webphone-lib?style=flat-square)

Makes calling easier by providing a layer of abstraction around SIP.js. To figure out why we made this, read [our blog post](https://wearespindle.com/articles/how-to-abstract-the-complications-of-sip-js-away-with-our-library/).

## Documentation

Check out the documentation [here](https://open-voip-alliance.github.io/WebphoneLib/).

## Cool stuff

- Allows you to switch audio devices mid-call.
- Automatically recovers calls on connectivity loss.
- Offers an easy-to-use, modern JavaScript API.

## Join us!

We would love more input for this project. Create an issue, create a pull request for an issue, or if you're not really sure, ask us. We're often hanging around on [discourse](https://discourse.openvoipalliance.org/). We would also love to hear your thoughts and feedback on our project and to answer any questions you might have!

## Getting started

```bash
$ git clone git@github.com:open-voip-alliance/WebphoneLib.git
$ cd WebphoneLib
$ touch demo/config.mjs
```

Add the following to `demo/config.mjs`:

```javascript
export const authorizationUserId = '';
export const password = '';
export const realm = '';
export const websocketUrl = '';
```

Run the demo server:

```bash
$ npm i && npm run demo
```

And then play around at http://localhost:1235/demo/.
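For reference, a filled-in config might look like the following. All values here are hypothetical placeholders (the actual values come from your SIP provider):

```javascript
// demo/config.mjs — example values only, replace with your provider's credentials
export const authorizationUserId = 'alice123'; // your SIP account id
export const password = 's3cret';
export const realm = 'sip.example.com'; // your SIP domain
export const websocketUrl = 'wss://websocket.example.com';
```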
## Examples

### Connecting and registering

```javascript
import { Client } from 'webphone-lib';

const account = {
  user: 'accountId',
  password: 'password',
  uri: 'sip:accountId@',
  name: 'test'
};

const transport = {
  wsServers: '', // your provider's websocket server
  iceServers: [] // only needed if your provider requires STUN/TURN
};

const media = {
  input: {
    id: undefined, // default audio device
    audioProcessing: true,
    volume: 1.0,
    muted: false
  },
  output: {
    id: undefined, // default audio device
    volume: 1.0,
    muted: false
  }
};

const client = new Client({ account, transport, media });

await client.register();
```

### Incoming call

```javascript
client.on('invite', async (session) => {
  try {
    ringer();

    // wait until the call is picked up
    const { accepted, rejectCause } = await session.accepted();
    if (!accepted) {
      return;
    }

    showCallScreen();
    await session.terminated();
  } catch (e) {
    showErrorMessage(e);
  } finally {
    closeCallScreen();
  }
});
```

### Outgoing call

```javascript
const session = client.invite('sip:518@');

try {
  showOutgoingCallInProgress();

  // wait until the call is picked up
  const { accepted, rejectCause } = await session.accepted();
  if (!accepted) {
    showRejectedScreen();
    return;
  }

  showCallScreen();
  await session.terminated();
} catch (e) {
  showErrorMessage(e);
} finally {
  closeCallScreen();
}
```

## Attended transfer of a call

```javascript
const { accepted } = await sessionA.accepted();
if (accepted) {
  await sessionA.hold();

  const sessionB = client.invite('sip:519@');
  const { accepted: acceptedB } = await sessionB.accepted();
  if (acceptedB) {
    // immediately transfer after the other party picked up :p
    await client.attendedTransfer(sessionA, sessionB);
    await sessionB.terminated();
  }
}
```

## Audio device selection

#### Set a primary input & output device:

```javascript
const client = new Client({
  account,
  transport,
  media: {
    input: {
      id: undefined, // default input device
      audioProcessing: true,
      volume: 1.0,
      muted: false
    },
    output: {
      id: undefined, // default output device
      volume: 1.0,
      muted: false
    }
  }
});
```

#### Change the primary I/O devices:
```javascript
client.defaultMedia.output.id = '230988012091820398213';
```

#### Change the media of a session:

```javascript
const session = await client.invite('123');

session.media.input.volume = 0.5;
session.media.input.audioProcessing = false;
session.media.input.muted = true;
session.media.output.muted = false;

session.media.setInput({
  id: '120398120398123',
  audioProcessing: true,
  volume: 0.5,
  muted: true
});
```

## Commands

| Command                   | Help                                                                               |
| ------------------------- | ---------------------------------------------------------------------------------- |
| npm run docs              | Generate the docs                                                                  |
| npm run test              | Run the tests                                                                      |
| npm run test -- --verbose | Show output of `console.log` during tests                                          |
| npm run test-watch        | Watch the tests as you make changes                                                |
| npm run build             | Build the project                                                                  |
| npm run prepare           | Prepare the project for publishing; this is automatically run before `npm publish` |
| npm run lint              | Run `tslint` over the source files                                                 |
| npm run typecheck         | Verify that type constraints are met                                               |

## Generate documentation

[Typedoc](https://typedoc.org/guides/doccomments/) is used to generate the documentation from the `jsdoc` comments in the source code. See [this link](https://typedoc.org/guides/doccomments/) for more information on which `jsdoc` tags are supported.

## Run puppeteer tests

### Using docker

Add a `.env` file with the following:

```
USER_A =
USER_B =
PASSWORD_A =
PASSWORD_B =
NUMBER_A =
NUMBER_B =
WEBSOCKET_URL =
REALM =
```

Then call `docker-compose up` to run the tests.

Note: Don't forget to call `npm ci` in the puppeteer folder. :)

### Without docker

If you don't want to use docker, you will need to run the demo with the `npm run demo` command (and keep it running) and run the tests with `npm run test:e2e`. For this you will also need the `.env` file with your settings.
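The device `id` values used in the media examples correspond to `MediaDeviceInfo.deviceId` values from the standard `navigator.mediaDevices.enumerateDevices()` browser API (not part of this library). A minimal sketch of collecting audio devices to populate a device picker, with a hypothetical `devicesOfKind` helper:

```javascript
// Pure helper: pick devices of a given kind ('audioinput' or 'audiooutput')
// from a list of MediaDeviceInfo-like objects.
function devicesOfKind(devices, kind) {
  return devices
    .filter((d) => d.kind === kind)
    .map((d) => ({ id: d.deviceId, label: d.label }));
}

// In the browser (device labels are only populated once the user
// has granted microphone permission):
async function listAudioDevices() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return {
    inputs: devicesOfKind(devices, 'audioinput'),
    outputs: devicesOfKind(devices, 'audiooutput')
  };
}
```

An `id` from `outputs` could then be assigned to e.g. `client.defaultMedia.output.id` as shown above.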