<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>estimation | DAILY ZSOCIAL MEDIA NEWS</title>
	<atom:link href="https://dailyzsocialmedianews.com/tag/estimation/feed/" rel="self" type="application/rss+xml" />
	<link>https://dailyzsocialmedianews.com</link>
	<description>ALL ABOUT DAILY ZSOCIAL MEDIA NEWS</description>
	<lastBuildDate>Thu, 21 Mar 2024 01:34:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.1</generator>

<image>
	<url>https://dailyzsocialmedianews.com/wp-content/uploads/2020/12/cropped-DAILY-ZSOCIAL-MEDIA-NEWS-e1607166156946-32x32.png</url>
	<title>estimation | DAILY ZSOCIAL MEDIA NEWS</title>
	<link>https://dailyzsocialmedianews.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Optimizing RTC bandwidth estimation with machine learning</title>
		<link>https://dailyzsocialmedianews.com/optimizing-rtc-bandwidth-estimation-with-machine-studying/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Thu, 21 Mar 2024 01:34:51 +0000</pubDate>
				<category><![CDATA[Facebook]]></category>
		<category><![CDATA[bandwidth]]></category>
		<category><![CDATA[estimation]]></category>
		<category><![CDATA[learning]]></category>
		<category><![CDATA[Machine]]></category>
		<category><![CDATA[Optimizing]]></category>
		<category><![CDATA[RTC]]></category>
		<guid isPermaLink="false">https://dailyzsocialmedianews.com/?p=24934</guid>

					<description><![CDATA[<div style="margin-bottom:20px;"><img width="1023" height="576" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Optimizing RTC bandwidth estimation with machine learning" decoding="async" fetchpriority="high" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning.png 1023w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning-300x169.png 300w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning-768x432.png 768w" sizes="(max-width: 1023px) 100vw, 1023px" /></div><p>Bandwidth estimation (BWE) and congestion control play an important role in delivering high-quality real-time communication (RTC) across Meta’s family of apps. We’ve adopted a machine learning (ML)-based approach that allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport. We’re sharing our experiment results from this approach, some of [&#8230;]</p>
The post <a href="https://dailyzsocialmedianews.com/optimizing-rtc-bandwidth-estimation-with-machine-studying/">Optimizing RTC bandwidth estimation with machine learning</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></description>
										<content:encoded><![CDATA[<div style="margin-bottom:20px;"><img width="1023" height="576" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Optimizing RTC bandwidth estimation with machine learning" decoding="async" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning.png 1023w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning-300x169.png 300w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/21013450/Optimizing-RTC-bandwidth-estimation-with-machine-learning-768x432.png 768w" sizes="(max-width: 1023px) 100vw, 1023px" /></div><p></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Bandwidth estimation (BWE) and congestion control play an important role in delivering high-quality real-time communication (RTC) across Meta’s family of apps.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’ve adopted a machine learning (ML)-based approach that allows us</span><span style="font-weight: 400;"> to solve networking problems holistically across layers such as BWE, network resiliency, and transport.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’re sharing our experiment results from this approach, some of the challenges we encountered during execution, and learnings for new adopters.</span></li>
</ul>
<p><span style="font-weight: 400;">Our existing bandwidth estimation (BWE) module at Meta is</span> <span style="font-weight: 400;">based on WebRTC’s Google Congestion Controller (GCC)</span><span style="font-weight: 400;">. We have made several improvements through parameter tuning, but this has resulted in a more complex system, as shown in Figure 1.</span></p>
<p>Figure 1: BWE module’s system diagram for congestion control in RTC.</p>
<p><span style="font-weight: 400;">One challenge with the tuned congestion control (CC)/BWE algorithm was that it had multiple parameters and actions that were dependent on network conditions. For example, there was a trade-off between quality and reliability; improving quality for high-bandwidth users often led to reliability regressions for low-bandwidth users, and vice versa, making it challenging to optimize the user experience for different network conditions.</span></p>
<p><span style="font-weight: 400;">Additionally, we noticed some inefficiencies in improving and maintaining the complex BWE module:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Due to the absence of realistic network conditions during our experimentation process, fine-tuning the parameters for user clients necessitated several attempts.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Even after the rollout, it wasn’t clear if the optimized parameters were still applicable for the targeted network types.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">This resulted in complex code logic and branches for engineers to maintain.</span></li>
</ol>
<p><span style="font-weight: 400;">To solve these inefficiencies, we developed a machine learning (ML)-based, network-targeting approach that offers a cleaner alternative to hand-tuned rules. This approach also allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport.</span></p>
<h2><span style="font-weight: 400;">Network characterization</span></h2>
<p><span style="font-weight: 400;">An ML model-based approach leverages time series data to improve the bandwidth estimation by using offline parameter tuning for characterized network types. </span></p>
<p><span style="font-weight: 400;">For an RTC call to be completed, the endpoints must be connected to each other through network devices. The optimal configs that have been tuned offline are stored on the server and can be updated in real time. During the call connection setup, these optimal configs are delivered to the client. During the call, media is transferred directly between the endpoints or through a relay server. Depending on the network signals collected during the call, an ML-based approach characterizes the network into different types and applies the optimal configs for the detected type.</span></p>
<p><span style="font-weight: 400;">Figure 2 illustrates an example of an RTC call that’s optimized using the ML-based approach. </span><span style="font-weight: 400;"> </span></p>
<p><img decoding="async" class="size-large wp-image-21120" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-2.png?resize=192,108 192w" sizes="(max-width: 992px) 100vw, 62vw"/>Figure 2: An example RTC call configuration with optimized parameters delivered from the server and based on the current network type.</p>
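As a rough illustration of this config-delivery mechanism, the server-side lookup of offline-tuned parameters per detected network type might be sketched as follows. The network-type names, parameter names, and values here are hypothetical assumptions for illustration, not Meta's actual tuned configs:

```python
# Hypothetical per-network-type config store. Offline tuning produces one
# parameter set per characterized network type; at call setup (or when the
# client-side model re-characterizes the network mid-call), the matching
# set is applied. All keys and values below are illustrative.
TUNED_CONFIGS = {
    "random_loss": {"loss_tolerance_pct": 10, "fec_overhead_pct": 20},
    "bursty_loss": {"loss_tolerance_pct": 2, "fec_overhead_pct": 35},
    "default":     {"loss_tolerance_pct": 2, "fec_overhead_pct": 10},
}

def config_for(detected_type: str) -> dict:
    """Return the tuned config for a detected network type, falling back
    to the default when the type is unknown."""
    return TUNED_CONFIGS.get(detected_type, TUNED_CONFIGS["default"])
```

The fallback entry matters in practice: a client whose network the model cannot confidently characterize should still get safe, conservative parameters.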
<h2><span style="font-weight: 400;">Model learning and offline parameter tuning</span></h2>
<p><span style="font-weight: 400;">On a high level, network characterization consists of two main components, as shown in Figure 3. The first component is offline ML model learning to categorize the network type (random packet loss versus bursty loss). The second component uses offline simulations to tune parameters optimally for the categorized network type. </span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21121" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-3.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 3: Offline ML-model learning and parameter tuning.</p>
<p><span style="font-weight: 400;">For model learning, we leverage the time series data (network signals and non-personally identifiable information, see Figure 6, below) from production calls and simulations. Compared to the aggregate metrics logged after the call, time series data captures the time-varying nature and dynamics of the network. We use</span><span style="font-weight: 400;"> FBLearner</span><span style="font-weight: 400;">, our internal AI stack, for the training pipeline and deliver the PyTorch model files on demand to the clients at the start of the call.</span></p>
<p><span style="font-weight: 400;">For offline tuning, we use simulations to run network profiles for the detected types and choose the optimal parameters for the modules based on improvements in technical metrics (such as quality, freezes, and so on).</span></p>
<h2><span style="font-weight: 400;">Model architecture</span></h2>
<p><span style="font-weight: 400;">From our experience, we’ve found that it’s necessary to combine time series features with non-time series features (i.e., metrics derived from the time window) for highly accurate modeling.</span></p>
<p><span style="font-weight: 400;">To handle both time series and non-time series data, we’ve designed a model architecture that can process input from both sources.</span></p>
<p><span style="font-weight: 400;">The time series data will pass through a</span> <span style="font-weight: 400;">long short-term memory (LSTM) layer</span><span style="font-weight: 400;"> that will convert time series input into a one-dimensional vector representation, such as 16×1. The non-time series data or dense data will pass through a dense layer (i.e., a fully connected layer). Then the two vectors will be concatenated, to fully represent the network condition in the past, and passed through a fully connected layer again. The final output from the neural network model will be the predicted output of the target/task, as shown in Figure 4. </span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21122" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-4.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 4: Combined-model architecture with LSTM and Dense Layers</p>
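A minimal PyTorch sketch of such a combined architecture: an LSTM encodes the time series into a fixed-length vector, a dense layer encodes the non-time-series features, and the concatenated representation feeds a final fully connected head. All feature counts and layer sizes below are illustrative assumptions, not the production model's dimensions:

```python
import torch
import torch.nn as nn

class CombinedNet(nn.Module):
    """Sketch: LSTM branch for time series + dense branch for derived
    features, concatenated into a fully connected output head.
    Dimensions are illustrative (e.g., hidden=16 mirrors the 16x1
    vector representation mentioned in the text)."""

    def __init__(self, ts_features=8, dense_features=12, hidden=16, classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=ts_features, hidden_size=hidden,
                            batch_first=True)
        self.dense = nn.Linear(dense_features, hidden)
        self.head = nn.Linear(hidden * 2, classes)

    def forward(self, ts, dense):
        # ts: (batch, time, ts_features); dense: (batch, dense_features)
        _, (h, _) = self.lstm(ts)            # h: (1, batch, hidden)
        ts_vec = h[-1]                        # fixed-length time series vector
        dense_vec = torch.relu(self.dense(dense))
        combined = torch.cat([ts_vec, dense_vec], dim=1)
        return self.head(combined)            # logits for the target task

model = CombinedNet()
logits = model(torch.randn(4, 10, 8), torch.randn(4, 12))  # 4 samples, 10 s window
```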
<h2><span style="font-weight: 400;">Use case: Random packet loss classification</span></h2>
<p><span style="font-weight: 400;">Let’s consider the use case of categorizing packet loss as either random or congestion-induced. The former is caused by unreliable network components, while the latter is caused by limits in queue length (which are delay dependent). Here is the ML task definition:</span><span style="font-weight: 400;"><br /></span><span style="font-weight: 400;"><br /></span><span style="font-weight: 400;">Given the network conditions in the past N seconds (here, N = 10), and that the network is currently incurring packet loss, the goal is to characterize the packet loss at the current timestamp as RANDOM or not.</span></p>
<p><span style="font-weight: 400;">Figure 5 illustrates how we leverage the architecture to achieve that goal:</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21123" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-5.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 5: Model architecture for a random packet loss classification task.</p>
<h3><span style="font-weight: 400;">Time series features</span></h3>
<p><span style="font-weight: 400;">We leverage the following time series features gathered from logs:</span></p>
<p><img loading="lazy" decoding="async" class="wp-image-21136 size-large" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png 2500w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=916,515 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=2048,1152 2048w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-6b.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 6: Time series features used for model training.</p>
<h3><span style="font-weight: 400;">BWE optimization</span></h3>
<p><span style="font-weight: 400;">When the ML model detects random packet loss, we perform local optimization on the BWE module by:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Increasing the tolerance to random packet loss in the loss-based BWE (holding the bitrate).</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Increasing the ramp-up speed, depending on the link capacity on high bandwidths.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Increasing the network resiliency by sending additional forward-error correction packets to recover from packet loss.</span></li>
</ul>
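These three adjustments could be sketched as follows. This is a hypothetical illustration; the parameter names, thresholds, and values are assumptions for clarity, not the actual BWE module's interface:

```python
# Hypothetical sketch of local BWE adjustments applied when the model
# classifies the current packet loss as RANDOM. Names and values are
# illustrative, not Meta's actual tuned parameters.
def apply_random_loss_config(bwe: dict, random_loss_detected: bool,
                             link_capacity_kbps: float) -> dict:
    if not random_loss_detected:
        return bwe  # leave the default congestion-control behavior intact
    # 1. Tolerate more random loss in the loss-based BWE: hold the bitrate
    #    instead of backing off.
    bwe["loss_tolerance_pct"] = 10
    bwe["hold_bitrate_on_loss"] = True
    # 2. Ramp up faster when the link capacity indicates high bandwidth.
    if link_capacity_kbps > 2000:
        bwe["rampup_factor"] = 1.5
    # 3. Recover lost packets with extra forward-error-correction overhead.
    bwe["fec_overhead_pct"] = 20
    return bwe
```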
<h2><span style="font-weight: 400;">Network prediction</span></h2>
<p><span style="font-weight: 400;">The network characterization problem discussed in the previous sections focuses on classifying network types based on past information using time series data. Such simple classification tasks can also be handled with hand-tuned rules, although with some limitations. The real power of leveraging ML for networking, however, comes from using it to predict future network conditions.</span></p>
<p><span style="font-weight: 400;">We have applied ML to congestion-prediction problems to optimize the experience of low-bandwidth users.</span></p>
<h2><span style="font-weight: 400;">Congestion prediction</span></h2>
<p><span style="font-weight: 400;">From our analysis of production data, we found that low-bandwidth users often incur congestion due to the behavior of the GCC module. By predicting this congestion, we can improve reliability for such users. Toward this, we addressed the following problem statement using round-trip time (RTT) and packet loss:</span><span style="font-weight: 400;"><br /></span><span style="font-weight: 400;"><br /></span><span style="font-weight: 400;">Given the historical time series data from production/simulation (“N” seconds), the goal is to predict packet loss due to congestion, or the congestion itself, in the next “N” seconds; that is, a spike in RTT followed by a packet loss or a further growth in RTT.</span></p>
<p><span style="font-weight: 400;">Figure 7 shows an example from a simulation where the bandwidth alternates between 500 Kbps and 100 Kbps every 30 seconds. As we lower the bandwidth, the network incurs congestion and the ML model predictions fire the green spikes even before the delay spikes and packet loss occur. This early prediction of congestion is helpful in faster reactions and thus improves the user experience by preventing video freezes and connection drops.</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21137" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png 2500w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=916,515 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=2048,1152 2048w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-7b.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 7: Simulated network scenario with alternating bandwidth for congestion prediction</p>
<h2><span style="font-weight: 400;">Generating training samples</span></h2>
<p><span style="font-weight: 400;">The main challenge in modeling is generating training samples for a variety of congestion situations. With simulations, it’s harder to capture different types of congestion that real user clients would encounter in production networks. As a result, we used actual production logs for labeling congestion samples, following the RTT-spikes criteria in the past and future windows according to the following assumptions:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Absent past RTT spikes, packet losses in the past and future are independent.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Absent past RTT spikes, we cannot predict future RTT spikes or fractional losses (i.e., flosses).</span></li>
</ul>
<p><span style="font-weight: 400;">We split the time window into past (4 seconds) and future (4 seconds) for labeling.</span><span style="font-weight: 400;"><br /></span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21126" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-8.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 8: Labeling criteria for congestion prediction</p>
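The labeling criteria above can be sketched as follows. This is a hedged illustration of the past/future window split under the stated assumptions; the spike threshold and per-second series format are assumptions, not the production labeling pipeline:

```python
# Hypothetical labeling sketch for congestion-prediction training samples:
# split the window around "now" into a 4 s past half and a 4 s future
# half, and label positive only when a past RTT spike is followed by a
# future spike or loss. The 2x spike threshold is an illustrative choice.
PAST_S, FUTURE_S, SPIKE_FACTOR = 4, 4, 2.0

def label_congestion(rtt_ms, losses, now, baseline_rtt_ms):
    """rtt_ms / losses: per-second series; now: index of the current second."""
    past = rtt_ms[now - PAST_S:now]
    future = rtt_ms[now:now + FUTURE_S]
    past_spike = any(r > SPIKE_FACTOR * baseline_rtt_ms for r in past)
    future_spike = any(r > SPIKE_FACTOR * baseline_rtt_ms for r in future)
    future_loss = any(losses[now:now + FUTURE_S])
    # Per the assumptions: absent a past RTT spike, future spikes and
    # losses are treated as unpredictable, so the sample is negative.
    if not past_spike:
        return 0
    return 1 if (future_spike or future_loss) else 0
```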
<h2><span style="font-weight: 400;">Model performance</span></h2>
<p><span style="font-weight: 400;">Unlike network characterization, where ground truth is unavailable, we can obtain ground truth by examining the future time window after it has passed and then comparing it with the prediction made four seconds earlier. With this logging information gathered from real production clients, we compared the performance in offline training to online data from user clients:</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-21127" src="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?w=1024" alt="" width="1024" height="576" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=580,326 580w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=916,516 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=768,432 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=1024,576 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=1536,864 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=96,54 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Optimizing-BWE-with-ML-Hero_Figure-9.png?resize=192,108 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>Figure 9: Offline versus online model performance comparison.</p>
<h2><span style="font-weight: 400;">Experiment results</span></h2>
<p><span style="font-weight: 400;">Here are some highlights from our deployment of various ML models to improve bandwidth estimation:</span></p>
<h3><span style="font-weight: 400;">Reliability wins for congestion prediction</span></h3>
<p><span style="font-weight: 400;">✅ connection_drop_rate -0.326371 +/- 0.216084<br />
✅ last_minute_quality_regression_v1 -0.421602 +/- 0.206063<br />
✅ last_minute_quality_regression_v2 -0.371398 +/- 0.196064<br />
✅ bad_experience_percentage -0.230152 +/- 0.148308<br />
✅ transport_not_ready_pct -0.437294 +/- 0.400812</span></p>
<p><span style="font-weight: 400;">✅ peer_video_freeze_percentage -0.749419 +/- 0.180661<br />
✅ peer_video_freeze_percentage_above_500ms -0.438967 +/- 0.212394</span></p>
<h3><span style="font-weight: 400;">Quality and user engagement wins for random packet loss characterization in high bandwidth</span></h3>
<p><span style="font-weight: 400;">✅ peer_video_freeze_percentage -0.379246 +/- 0.124718<br />
✅ peer_video_freeze_percentage_above_500ms -0.541780 +/- 0.141212<br />
✅ peer_neteq_plc_cng_perc -0.242295 +/- 0.137200</span></p>
<p><span style="font-weight: 400;">✅ total_talk_time 0.154204 +/- 0.148788</span></p>
<h3><span style="font-weight: 400;">Reliability and quality wins for cellular low bandwidth classification</span></h3>
<p><span style="font-weight: 400;">✅ connection_drop_rate -0.195908 +/- 0.127956<br />
✅ last_minute_quality_regression_v1 -0.198618 +/- 0.124958<br />
✅ last_minute_quality_regression_v2 -0.188115 +/- 0.138033</span></p>
<p><span style="font-weight: 400;">✅ peer_neteq_plc_cng_perc -0.359957 +/- 0.191557<br />
✅ peer_video_freeze_percentage -0.653212 +/- 0.142822</span></p>
<h3><span style="font-weight: 400;">Reliability and quality wins for cellular high bandwidth classification</span></h3>
<p><span style="font-weight: 400;">✅ avg_sender_video_encode_fps 0.152003 +/- 0.046807<br />
✅ avg_sender_video_qp -0.228167 +/- 0.041793<br />
✅ avg_video_quality_score 0.296694 +/- 0.043079<br />
✅ avg_video_sent_bitrate 0.430266 +/- 0.092045</span></p>
<h2><span style="font-weight: 400;">Future plans for applying ML to RTC</span></h2>
<p><span style="font-weight: 400;">From our project execution and experimentation on production clients, we noticed that an ML-based approach is more efficient in targeting, end-to-end monitoring, and updating than traditional hand-tuned rules for networking. However, the efficiency of ML solutions largely depends on data quality and labeling (using simulations or production logs). By applying ML-based solutions to network prediction problems, congestion in particular, we fully leveraged the power of ML. </span></p>
<p><span style="font-weight: 400;">In the future, we will be consolidating all the network characterization models into a single model using a multi-task approach to eliminate the inefficiency due to redundancy in model download, inference, and so on. We will be building a shared representation model for the time series to solve different tasks (e.g., bandwidth classification, packet loss classification, etc.) in network characterization. We will focus on building realistic production network scenarios for model training and validation. This will enable us to use ML to identify optimal network actions given the network conditions. We will persist in refining our learning-based methods to enhance network performance by considering existing network signals.</span></p>The post <a href="https://dailyzsocialmedianews.com/optimizing-rtc-bandwidth-estimation-with-machine-studying/">Optimizing RTC bandwidth estimation with machine learning</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
