<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Engineering | DAILY ZSOCIAL MEDIA NEWS</title>
	<atom:link href="https://dailyzsocialmedianews.com/tag/engineering/feed/" rel="self" type="application/rss+xml" />
	<link>https://dailyzsocialmedianews.com</link>
	<description>ALL ABOUT DAILY ZSOCIAL MEDIA NEWS</description>
	<lastBuildDate>Tue, 12 Mar 2024 15:22:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.1</generator>

<image>
	<url>https://dailyzsocialmedianews.com/wp-content/uploads/2020/12/cropped-DAILY-ZSOCIAL-MEDIA-NEWS-e1607166156946-32x32.png</url>
	<title>Engineering | DAILY ZSOCIAL MEDIA NEWS</title>
	<link>https://dailyzsocialmedianews.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Constructing Meta’s GenAI Infrastructure &#8211; Engineering at Meta</title>
		<link>https://dailyzsocialmedianews.com/constructing-metas-genai-infrastructure-engineering-at-meta/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 12 Mar 2024 15:22:58 +0000</pubDate>
				<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[GenAI]]></category>
		<category><![CDATA[infrastructure]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[Metas]]></category>
		<guid isPermaLink="false">https://dailyzsocialmedianews.com/?p=24857</guid>

					<description><![CDATA[<div style="margin-bottom:20px;"><img width="1023" height="733" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Building Meta’s GenAI Infrastructure - Engineering at Meta" decoding="async" fetchpriority="high" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta.png 1023w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta-300x215.png 300w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta-768x550.png 768w" sizes="(max-width: 1023px) 100vw, 1023px" /></div><p>Marking a major investment in Meta’s AI future, we are announcing two 24k GPU clusters. We are sharing details on the hardware, network, storage, design, performance, and software that help us extract high throughput and reliability for various AI workloads. We use this cluster design for Llama 3 training. We are strongly committed to open [&#8230;]</p>
The post <a href="https://dailyzsocialmedianews.com/constructing-metas-genai-infrastructure-engineering-at-meta/">Constructing Meta’s GenAI Infrastructure – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></description>
										<content:encoded><![CDATA[<div style="margin-bottom:20px;"><img width="1023" height="733" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="Building Meta’s GenAI Infrastructure - Engineering at Meta" decoding="async" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta.png 1023w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta-300x215.png 300w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/03/12152256/Building-Metas-GenAI-Infrastructure-Engineering-at-Meta-768x550.png 768w" sizes="(max-width: 1023px) 100vw, 1023px" /></div><p></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Marking a major investment in Meta’s AI future, we are announcing two 24k GPU clusters. We are sharing details on the hardware, network, storage, design, performance, and software that help us extract high throughput and reliability for various AI workloads. We use this cluster design for Llama 3 training.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We are strongly committed to open compute and open source. We built these clusters on top of </span><span style="font-weight: 400;">Grand Teton</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">OpenRack</span><span style="font-weight: 400;">, and </span><span style="font-weight: 400;">PyTorch</span><span style="font-weight: 400;"> and continue to push open innovation across the industry.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.</span></li>
</ul>
<p><span style="font-weight: 400;">To lead in developing AI means leading investments in hardware infrastructure. Hardware infrastructure plays an important role in AI’s future. Today, we’re sharing details on two versions of our 24,576-GPU data center scale cluster at Meta. These clusters support our current and next generation AI models, including Llama 3, the successor to Llama 2, our publicly released LLM, as well as AI research and development across GenAI and other areas.</span></p>
<h2><span style="font-weight: 400;">A peek into Meta’s large-scale AI clusters</span></h2>
<p><span style="font-weight: 400;">Meta’s long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from. As we work towards AGI, we have also worked on scaling our clusters to power this ambition. The progress we make towards AGI creates new products,</span> <span style="font-weight: 400;">new AI features for our family of apps</span><span style="font-weight: 400;">, and new AI-centric computing devices. </span></p>
<p><span style="font-weight: 400;">While we’ve had a long history of building AI infrastructure, we first shared details on our </span><span style="font-weight: 400;">AI Research SuperCluster (RSC)</span><span style="font-weight: 400;">, featuring 16,000 NVIDIA A100 GPUs, in 2022. RSC has accelerated our open and responsible AI research by helping us build our first generation of advanced AI models. It played and continues to play an important role in the development of </span><span style="font-weight: 400;">Llama</span><span style="font-weight: 400;"> and </span><span style="font-weight: 400;">Llama 2</span><span style="font-weight: 400;">, as well as advanced AI models for applications ranging from computer vision, NLP, and speech recognition, to</span> <span style="font-weight: 400;">image generation</span><span style="font-weight: 400;">, and even</span> <span style="font-weight: 400;">coding</span><span style="font-weight: 400;">.</span></p>
<h2><span style="font-weight: 400;">Under the hood</span></h2>
<p><span style="font-weight: 400;">Our newer AI clusters build upon the successes and lessons learned from RSC. We focused on building end-to-end AI systems with a major emphasis on researcher and developer experience and productivity. The efficiency of the high-performance network fabrics within these clusters, some of the key storage decisions, combined with the 24,576 NVIDIA Tensor Core H100 GPUs in each, allow both cluster versions to support larger and more complex models than could be supported in RSC and pave the way for advancements in GenAI product development and AI research.</span></p>
<h3><span style="font-weight: 400;">Network</span></h3>
<p><span style="font-weight: 400;">At Meta, we handle hundreds of trillions of AI model executions per day. Delivering these services at a large scale requires a highly advanced and flexible infrastructure. Custom designing much of our own hardware, software, and network fabrics allows us to optimize the end-to-end experience for our AI researchers while ensuring our data centers operate efficiently. </span></p>
<p><span style="font-weight: 400;">With this in mind, we built one cluster with a remote direct memory access (RDMA) over converged Ethernet (RoCE) network fabric solution based on the Arista 7800 with Wedge400 and Minipack2 OCP rack switches. The other cluster features an NVIDIA Quantum2 InfiniBand fabric. Both of these solutions interconnect 400 Gbps endpoints. With these two, we are able to assess the suitability and scalability of these different types of interconnect for large-scale training, giving us more insights that will help inform how we design and build even larger, scaled-up clusters in the future. Through careful co-design of the network, software, and model architectures, we have successfully used both RoCE and InfiniBand clusters for large GenAI workloads (including our ongoing training of Llama 3 on our RoCE cluster) without any network bottlenecks.</span></p>
<h3><span style="font-weight: 400;">Compute</span></h3>
<p><span style="font-weight: 400;">Both clusters are built using</span> <span style="font-weight: 400;">Grand Teton</span><span style="font-weight: 400;">, our in-house-designed, open GPU hardware platform that we’ve contributed to the Open Compute Project (OCP). Grand Teton builds on the many generations of AI systems that integrate power, control, compute, and fabric interfaces into a single chassis for better overall performance, signal integrity, and thermal performance. It provides rapid scalability and flexibility in a simplified design, allowing it to be quickly deployed into data center fleets and easily maintained and scaled. Combined with other in-house innovations like our</span> <span style="font-weight: 400;">Open Rack</span><span style="font-weight: 400;"> power and rack architecture, Grand Teton allows us to build new clusters in a way that is purpose-built for current and future applications at Meta.</span></p>
<p><span style="font-weight: 400;">We have been openly designing our GPU hardware platforms beginning with our </span><span style="font-weight: 400;">Big Sur platform in 2015</span><span style="font-weight: 400;">.</span></p>
<h3><span style="font-weight: 400;">Storage</span></h3>
<p><span style="font-weight: 400;">Storage plays an important role in AI training, and yet is one of the least talked-about aspects. As the GenAI training jobs become more multimodal over time, consuming large amounts of image, video, and text data, the need for data storage grows rapidly. The need to fit all that data storage into a performant, yet power-efficient footprint doesn’t go away though, which makes the problem more interesting.</span></p>
<p><span style="font-weight: 400;">Our storage deployment addresses the data and checkpointing needs of the AI clusters via a home-grown Linux Filesystem in Userspace (FUSE) API backed by a version of Meta’s ‘Tectonic’ distributed storage solution optimized for Flash media. This solution enables thousands of GPUs to save and load checkpoints in a synchronized fashion (a challenge for any storage solution) while also providing the flexible, high-throughput, exabyte-scale storage required for data loading.</span></p>
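<p>The synchronized-checkpoint pattern can be sketched in miniature with threads standing in for ranks. This is an illustrative sketch of the access pattern only (names and sizes are invented, and it is not Tectonic's or PyTorch's API): every rank writes its own shard concurrently, then all ranks wait before training resumes.</p>

```python
import tempfile
import threading
from pathlib import Path

NUM_RANKS = 8
barrier = threading.Barrier(NUM_RANKS)
ckpt_dir = Path(tempfile.mkdtemp())

def save_checkpoint(rank: int, step: int) -> None:
    # Each rank writes its own shard; the storage layer must absorb
    # NUM_RANKS simultaneous writers without serializing them.
    shard = ckpt_dir / f"step{step}-rank{rank}.bin"
    shard.write_bytes(rank.to_bytes(4, "little") * 1024)
    # All ranks wait until every shard is written before continuing:
    # this is the synchronized, bursty pattern described above.
    barrier.wait()

threads = [threading.Thread(target=save_checkpoint, args=(r, 100)) for r in range(NUM_RANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

shards = sorted(ckpt_dir.glob("step100-rank*.bin"))
print(len(shards))  # 8
```

<p>At real scale the "barrier" is a collective over thousands of ranks and each shard is gigabytes, which is why the burst must land on Flash-backed storage rather than spinning disks.</p>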
<p><span style="font-weight: 400;">We have also partnered with Hammerspace to co-develop and land a parallel network file system (NFS) deployment to meet the developer experience requirements for this AI cluster. Among other benefits, Hammerspace enables engineers to perform interactive debugging for jobs using thousands of GPUs, as code changes are immediately accessible to all nodes within the environment. When paired together, the combination of our Tectonic distributed storage solution and Hammerspace enables fast iteration velocity without compromising on scale.</span></p>
<p><span style="font-weight: 400;">The storage deployments in our GenAI clusters, both Tectonic- and Hammerspace-backed, are based on the YV3 Sierra Point server platform, upgraded with the latest high-capacity E1.S SSDs we can procure in the market today. Aside from the higher SSD capacity, the number of servers per rack was customized to achieve the right balance of throughput capacity per server, rack count reduction, and associated power efficiency. Utilizing the OCP servers as Lego-like building blocks, our storage layer is able to flexibly scale to future requirements in this cluster as well as in future, bigger AI clusters, while being fault-tolerant to day-to-day infrastructure maintenance operations.</span></p>
<h3><span style="font-weight: 400;">Performance</span></h3>
<p><span style="font-weight: 400;">One of the principles we have in building our large-scale AI clusters is to maximize performance and ease of use simultaneously without compromising one for the other. This is an important principle in creating the best-in-class AI models. </span></p>
<p><span style="font-weight: 400;">As we push the limits of AI systems, the best way we can test our ability to scale up our designs is to simply build a system, optimize it, and actually test it (while simulators help, they only go so far). In this design journey, we compared the performance of our small clusters with that of our large clusters to see where our bottlenecks are. In the graph below, AllGather collective performance is shown (as normalized bandwidth on a 0-100 scale) when a large number of GPUs are communicating with each other at message sizes where roofline performance is expected.</span></p>
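<p>For readers unfamiliar with how such numbers are normalized: collective benchmarks such as NVIDIA's nccl-tests typically report an AllGather "bus bandwidth" that scales the measured algorithm bandwidth by (n-1)/n, so results are comparable across cluster sizes and against the link rate. A sketch of that arithmetic, with made-up timing numbers (not measurements from these clusters):</p>

```python
def allgather_bus_bandwidth_gbps(message_bytes_per_rank: int, num_ranks: int, seconds: float) -> float:
    """nccl-tests-style 'bus bandwidth' for AllGather."""
    total_bytes = message_bytes_per_rank * num_ranks
    alg_bw = total_bytes / seconds                  # bytes/sec of gathered output
    bus_bw = alg_bw * (num_ranks - 1) / num_ranks  # traffic actually on each link
    return bus_bw * 8 / 1e9                        # convert bytes/sec to Gbps

# At roofline, a 400 Gbps endpoint should report close to 400 on this metric.
# 256 MiB per rank, 16 ranks, hypothetical 81 ms completion time:
print(round(allgather_bus_bandwidth_gbps(256 * 2**20, 16, 0.081), 1))  # 397.7
```

<p>Normalizing to a 0-100 scale, as in the graph, then just divides this by the endpoint line rate.</p>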
<p><span style="font-weight: 400;">Our out-of-box performance for large clusters was initially poor and inconsistent compared to optimized small-cluster performance. To address this, we made several changes to how our internal job scheduler schedules jobs with network topology awareness; this resulted in latency benefits and minimized the amount of traffic going to upper layers of the network. We also optimized our network routing strategy in combination with NVIDIA Collective Communications Library (NCCL) changes to achieve optimal network utilization. These changes pushed our large clusters to achieve the same great, expected performance as our small clusters.</span></p>
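<p>The topology-awareness idea can be illustrated with a greedy placement sketch (hypothetical pod names and a deliberately simplified model, not Meta's scheduler): pack a job's ranks into as few rack-level pods as possible, so that as little traffic as possible has to cross the upper spine layer.</p>

```python
def place_job(free_hosts_by_pod: dict, ranks_needed: int) -> dict:
    """Greedily place ranks in the emptiest pods first, minimizing pods spanned."""
    placement = {}
    for pod, free in sorted(free_hosts_by_pod.items(), key=lambda kv: -kv[1]):
        if ranks_needed == 0:
            break
        take = min(free, ranks_needed)
        placement[pod] = take
        ranks_needed -= take
    if ranks_needed:
        raise RuntimeError("not enough free hosts")
    return placement

print(place_job({"pod-a": 4, "pod-b": 16, "pod-c": 8}, 20))
# {'pod-b': 16, 'pod-c': 4}: the job spans 2 pods instead of 3, so only the
# 4 ranks in pod-c exchange traffic across the spine with the rest.
```

<p>A real scheduler must also weigh fragmentation, fault domains, and preemption, which is why the production change took several iterations rather than one heuristic.</p>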
<p><img decoding="async" class="size-large wp-image-21048" src="https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?w=1024" alt="" width="1024" height="768" srcset="https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png 1999w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=916,687 916w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=768,576 768w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=1024,768 1024w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=1536,1152 1536w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=96,72 96w, https://engineering.fb.com/wp-content/uploads/2024/03/Meta-24K-GenAi-clusters-performance.png?resize=192,144 192w" sizes="(max-width: 992px) 100vw, 62vw"/>In the figure we see that small cluster performance (overall communication bandwidth and utilization) reaches 90%+ out of the box, but an unoptimized large cluster performance has very poor utilization, ranging from 10% to 90%. After we optimize the full system (software, network, etc.), we see large cluster performance return to the ideal 90%+ range.</p>
<p><span style="font-weight: 400;">In addition to software changes targeting our internal infrastructure, we worked closely with teams authoring training frameworks and models to adapt to our evolving infrastructure. For example, NVIDIA H100 GPUs open the possibility of leveraging new data types such as 8-bit floating point (FP8) for training. Fully utilizing larger clusters required investments in additional parallelization techniques, and new storage solutions provided opportunities to optimize checkpointing across thousands of ranks so that it runs in hundreds of milliseconds.</span></p>
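<p>For context on the FP8 point: the two common 8-bit floating-point formats trade dynamic range for precision, and their largest finite values follow directly from the bit layouts. This is a worked calculation of general background, not Meta-specific code:</p>

```python
# E4M3: 4 exponent bits (bias 7), 3 mantissa bits. Only the all-ones
# exponent + all-ones mantissa pattern encodes NaN, so exponent 15 is still
# usable and the largest finite significand is 1 + 6/8 = 1.75.
e4m3_max = 1.75 * 2 ** (15 - 7)   # 448.0

# E5M2: 5 exponent bits (bias 15), 2 mantissa bits. The all-ones exponent is
# reserved for inf/NaN (IEEE-style), so the largest exponent is 30 and the
# largest significand is 1 + 3/4 = 1.75.
e5m2_max = 1.75 * 2 ** (30 - 15)  # 57344.0

print(e4m3_max, e5m2_max)
```

<p>The narrow range (448 for E4M3) is why FP8 training needs per-tensor scaling support in the framework rather than being a drop-in replacement for FP16 or BF16.</p>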
<p><span style="font-weight: 400;">We also recognize debuggability as one of the major challenges in large-scale training. Identifying a problematic GPU that is stalling an entire training job becomes very difficult at a large scale. We’re building tools such as desync debug, or a distributed collective flight recorder, to expose the details of distributed training and help identify issues in a much faster and easier way.</span></p>
<p><span style="font-weight: 400;">Finally, we’re continuing to evolve PyTorch, the foundational AI framework powering our AI workloads, to make it ready for training on tens, or even hundreds, of thousands of GPUs. We have identified multiple bottlenecks in process group initialization and reduced the startup time from sometimes hours down to minutes.</span></p>
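<p>As a toy illustration of why initialization at that scale needs care (a counting model only, not PyTorch's actual rendezvous code): if every rank checks in with a single coordinator one at a time, bootstrap cost grows linearly with world size, whereas a tree-structured exchange of peer information grows only logarithmically.</p>

```python
import math

def sequential_rounds(world_size: int) -> int:
    # Every rank registers with one coordinator, one after another.
    return world_size

def tree_rounds(world_size: int) -> int:
    # Ranks that already hold the peer list forward it, doubling the number
    # of informed ranks each round.
    return math.ceil(math.log2(world_size)) if world_size > 1 else 0

for n in (1024, 65536):
    print(n, sequential_rounds(n), tree_rounds(n))
# 1024 ranks: 1024 rounds vs 10; 65536 ranks: 65536 rounds vs 16.
```

<p>The real bottlenecks were more varied than this single pattern, but the same principle of removing serial, per-rank steps is what turns hours into minutes.</p>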
<h2><span style="font-weight: 400;">Commitment to open AI innovation</span></h2>
<p><span style="font-weight: 400;">Meta maintains its commitment to open innovation in AI software and hardware. We believe open-source hardware and software will always be a valuable tool to help the industry solve problems at large scale.</span></p>
<p><span style="font-weight: 400;">Today, we continue to support</span> <span style="font-weight: 400;">open hardware innovation</span><span style="font-weight: 400;"> as a founding member of OCP, where we make designs like Grand Teton and Open Rack available to the OCP community. We also continue to be the largest and primary contributor to </span><span style="font-weight: 400;">PyTorch</span><span style="font-weight: 400;">, the AI software framework that is powering a large chunk of the industry.</span></p>
<p><span style="font-weight: 400;">We also continue to be committed to open innovation in the AI research community. We’ve launched the</span> <span style="font-weight: 400;">Open Innovation AI Research Community</span><span style="font-weight: 400;">, a partnership program for academic researchers to deepen our understanding of how to responsibly develop and share AI technologies – with a particular focus on LLMs.</span></p>
<p><span style="font-weight: 400;">An open approach to AI is not new for Meta. We’ve also launched the AI Alliance, a group of leading organizations across the AI industry focused on accelerating responsible innovation in AI within an open community. Our AI efforts are built on a philosophy of open science and cross-collaboration. An open ecosystem brings transparency, scrutiny, and trust to AI development, and leads to innovations that everyone can benefit from, built with safety and responsibility top of mind.</span></p>
<h2><span style="font-weight: 400;">The future of Meta’s AI infrastructure</span></h2>
<p><span style="font-weight: 400;">These two AI training cluster designs are a part of our larger roadmap for the future of AI. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.</span></p>
<p><span style="font-weight: 400;">As we look to the future, we recognize that what worked yesterday or today may not be sufficient for tomorrow’s needs. That’s why we are constantly evaluating and improving every aspect of our infrastructure, from the physical and virtual layers to the software layer and beyond. Our goal is to create systems that are flexible and reliable to support the fast-evolving new models and research.  </span></p>The post <a href="https://dailyzsocialmedianews.com/constructing-metas-genai-infrastructure-engineering-at-meta/">Constructing Meta’s GenAI Infrastructure – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Meta loves Python &#8211; Engineering at Meta</title>
		<link>https://dailyzsocialmedianews.com/meta-loves-python-engineering-at-meta/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Mon, 12 Feb 2024 16:58:40 +0000</pubDate>
				<category><![CDATA[Facebook]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[loves]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">https://dailyzsocialmedianews.com/?p=24687</guid>

					<description><![CDATA[<p>By now you’re already aware that Python 3.12 has been released. But did you know that several of its new features were developed by Meta? Meta engineer Pascal Hartig (@passy) is joined on the Meta Tech Podcast by Itamar Oren and Carl Meyer, two software engineers at Meta, to discuss their teams’ contributions to the [&#8230;]</p>
The post <a href="https://dailyzsocialmedianews.com/meta-loves-python-engineering-at-meta/">Meta loves Python – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></description>
										<content:encoded><![CDATA[<p></p>
<p>By now you’re already aware that Python 3.12 has been released. But did you know that several of its new features were developed by Meta?</p>
<p>Meta engineer Pascal Hartig (@passy) is joined on the Meta Tech Podcast by Itamar Oren and Carl Meyer, two software engineers at Meta, to discuss their teams’ contributions to the latest Python release, including new hooks that allow for custom JITs like Cinder, Immortal Objects, improvements to the type system, faster comprehensions, and more.</p>
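<p>One of those changes, inlined comprehensions (PEP 709), is directly observable from Python itself: before 3.12, each list comprehension ran as a hidden nested function with its own frame, while 3.12 inlines it into the enclosing frame. A small probe (illustrative; it runs on any recent CPython, and which name you see depends on the interpreter version):</p>

```python
import sys

def frame_names():
    # Each iteration records the name of the frame it is executing in.
    return [sys._getframe().f_code.co_name for _ in range(1)]

names = frame_names()
print(names)
# CPython <= 3.11: ['<listcomp>']   (the comprehension ran as a hidden nested call)
# CPython >= 3.12: ['frame_names']  (PEP 709 inlines it, removing that call overhead)
```

<p>Eliminating that per-comprehension function call is where the "faster comprehensions" speedup mentioned in the episode comes from.</p>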
<p>Learn how and why they built these new features for Python and how they worked with and engaged with the Python community.</p>
<p>Download or listen to the episode below:</p>
<p><iframe loading="lazy" style="border: none;" title="Libsyn Player" src="https://html5-player.libsyn.com/embed/episode/id/29730333/height/90/theme/custom/thumbnail/yes/direction/forward/render-playlist/no/custom-color/000000/" width="100%" height="90" scrolling="no" allowfullscreen="allowfullscreen"></iframe></p>
<p>You can also find the episode wherever you get your podcasts, including:</p>
<p>Spotify<br />Apple Podcasts<br />PocketCasts<br />Castro<br />Overcast</p>
<p>The Meta Tech Podcast is a podcast brought to you by Meta, where we highlight the work Meta’s engineers are doing at every level – from low-level frameworks to end-user features.</p>
<p>Send us feedback on Instagram, Threads, or X.</p>
<p>And if you’re interested in learning more about career opportunities at Meta visit the Meta Careers page.</p>The post <a href="https://dailyzsocialmedianews.com/meta-loves-python-engineering-at-meta/">Meta loves Python – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DotSlash: Simplified executable deployment &#8211; Engineering at Meta</title>
		<link>https://dailyzsocialmedianews.com/dotslash-simplified-executable-deployment-engineering-at-meta/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Tue, 06 Feb 2024 15:15:06 +0000</pubDate>
				<category><![CDATA[Facebook]]></category>
		<category><![CDATA[deployment]]></category>
		<category><![CDATA[DotSlash]]></category>
		<category><![CDATA[Engineering]]></category>
		<category><![CDATA[executable]]></category>
		<category><![CDATA[Meta]]></category>
		<category><![CDATA[Simplified]]></category>
		<guid isPermaLink="false">https://dailyzsocialmedianews.com/?p=24656</guid>

					<description><![CDATA[<div style="margin-bottom:20px;"><img width="643" height="1024" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="DotSlash: Simplified executable deployment - Engineering at Meta" decoding="async" loading="lazy" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta.png 643w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta-188x300.png 188w" sizes="auto, (max-width: 643px) 100vw, 643px" /></div><p>We’ve open sourced DotSlash, a tool that makes large executables available in source control with a negligible impact on repository size, thus avoiding I/O-heavy clone operations. With DotSlash, a set of platform-specific executables is replaced with a single script containing descriptors for the supported platforms. DotSlash handles transparently fetching, decompressing, and verifying the appropriate remote [&#8230;]</p>
The post <a href="https://dailyzsocialmedianews.com/dotslash-simplified-executable-deployment-engineering-at-meta/">DotSlash: Simplified executable deployment – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></description>
										<content:encoded><![CDATA[<div style="margin-bottom:20px;"><img width="643" height="1024" src="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="DotSlash: Simplified executable deployment - Engineering at Meta" decoding="async" loading="lazy" srcset="https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta.png 643w, https://social-media-news.s3.amazonaws.com/wp-content/uploads/2024/02/06151504/DotSlash-Simplified-executable-deployment-Engineering-at-Meta-188x300.png 188w" sizes="auto, (max-width: 643px) 100vw, 643px" /></div><p></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">We’ve open sourced </span><span style="font-weight: 400;">DotSlash</span><span style="font-weight: 400;">, a tool that makes large executables available in source control with a negligible impact on repository size, thus avoiding I/O-heavy clone operations.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">With DotSlash, a set of platform-specific executables is replaced with a single script containing descriptors for the supported platforms. DotSlash handles transparently fetching, decompressing, and verifying the appropriate remote artifact for the current operating system and CPU.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">At Meta, the overwhelming majority of DotSlash files are generated and committed to source control via automation, so we are also releasing a complementary GitHub Action to assemble a comparable setup outside of Meta.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">DotSlash is written in Rust for performance and is cross-platform.</span></li>
</ul>
<p><span style="font-weight: 400;">At Meta, we have a vast array of first-party and third-party command line tools that need to be available across a diverse range of developer environments. Reliably getting the appropriate version of each tool to the right place can be a challenging task.</span></p>
<p><span style="font-weight: 400;">For example, the source code for many of our first-party tools lives alongside the projects that leverage them inside our </span><span style="font-weight: 400;">massive monorepo</span><span style="font-weight: 400;">. For such tools, the standard practice is to use </span><span style="font-weight: 400;">buck2 run</span><span style="font-weight: 400;"> to build and run executables from source, as necessary. This has the advantage that tools and the projects that use them can be updated atomically in a single commit.</span></p>
<p><span style="font-weight: 400;">While we use extensive caching and </span><span style="font-weight: 400;">remote execution</span><span style="font-weight: 400;"> to provide our developers with fast builds, there will always be cases where buck2 run is going to be considerably slower than running the prebuilt binary directly. While we leverage a </span><span style="font-weight: 400;">virtual filesystem</span><span style="font-weight: 400;"> that reduces the drawbacks of checking large binaries into source control compared to a traditional physical filesystem, there are still pathological cases that are best avoided by keeping such files out of the repository in the first place. (This practice also eliminates a large class of code provenance issues.)</span></p>
<p><span style="font-weight: 400;">Further, not everything we use is built from source, nor do all of our tools live in source control. For example, there is the case of buck2 itself, which needs to be pre-built for developers and readily available on the $PATH for convenience. For core developer tools like Buck2 and Sapling, we use a Chef recipe to deploy new versions, installing them in /usr/local/bin (or somewhere on the appropriate %PATH% on Windows) across a variety of developer environments.</span></p>
<p><span style="font-weight: 400;">While this approach is reasonable for commonly-used executables, it is not a great fit for the long tail of tools. That is, while it might be convenient to install everything a developer might need in /usr/local/bin by default, this could easily add up to tens or hundreds of gigabytes of disk space, very little of which will end up being executed in practice. In turn, this makes Chef runs more expensive and prone to failure.</span></p>
<h2><span style="font-weight: 400;">Introducing DotSlash</span></h2>
<p><span style="font-weight: 400;">DotSlash attempts to solve many of the problems described in the previous section. While </span><span style="font-weight: 400;">we do not claim it is a silver bullet</span><span style="font-weight: 400;">, we have found it to be the right solution for many of our internal use cases. At Meta, DotSlash is executed </span><span style="font-weight: 400;">hundreds of millions of times per day</span><span style="font-weight: 400;"> to deliver a mix of first-party and third-party tools to end-user developers as well as hermetic build environments.</span></p>
<p><span style="font-weight: 400;">The idea is fairly simple: we replace the contents of a set of platform-specific, heavyweight executables with a single lightweight text file that can be read by the dotslash </span><span style="font-weight: 400;">command line tool (which must be installed on the user’s $PATH</span><span style="font-weight: 400;">). We call such a file a </span>DotSlash file<span style="font-weight: 400;">. It contains the information DotSlash needs to fetch and run the executable it replaces for the host platform. By convention, a DotSlash file maintains the name of the original file rather than calling attention to itself via a custom file extension. Instead, it aspires to be a transparent wrapper for the original executable. To that end, a DotSlash file is </span><span style="font-weight: 400;">required</span><span style="font-weight: 400;"> to start with #!/usr/bin/env dotslash</span><span style="font-weight: 400;"> (even on Windows) to help maintain this illusion.</span></p>
<p><span style="font-weight: 400;">Consider a hypothetical DotSlash file named node that is designed to run v18.19.0 of Node.js. Users across x86 Linux, x86 macOS, and ARM macOS can all run the <i>same</i> DotSlash file, as DotSlash will take care of selecting the appropriate executable for the host on which it is being run. In this way, DotSlash simplifies the work of cross-platform releases.</span></p>
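<p>A minimal sketch of what such a file might look like (the sizes, digests, and URLs below are placeholders, assuming each platform ships a single zst-compressed executable):</p>

```
#!/usr/bin/env dotslash

{
  "name": "node-v18.19.0",
  "platforms": {
    "linux-x86_64": {
      "size": 12345678,
      "hash": "blake3",
      "digest": "<hex BLAKE3 digest of the Linux artifact>",
      "format": "zst",
      "providers": [
        {"url": "https://example.com/node-v18.19.0-linux-x64.zst"}
      ]
    },
    /* analogous "macos-x86_64" and "macos-aarch64" entries */
  }
}
```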
<p><span style="font-weight: 400;">When executing node in this example, DotSlash runs through the workflow described below.</span></p>
<p><span style="font-weight: 400;">See the </span><span style="font-weight: 400;">How DotSlash Works</span><span style="font-weight: 400;"> documentation for details.</span></p>
<p><span style="font-weight: 400;">Because of how </span><span style="font-weight: 400; font-family: 'courier new', courier; color: #339966;">#!</span><span style="font-weight: 400;"> works on Mac and Linux, when a user runs ./node --version, the invocation effectively becomes dotslash ./node --version. DotSlash requires that its first argument is a file that starts with #!/usr/bin/env dotslash, as mentioned above. Once it verifies the header, it uses a lenient JSON parser to read the rest of the file. DotSlash then finds the entry in the &#8220;platforms&#8221; section that corresponds to the host it is running on.</span></p>
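<p>The header check and lenient parse can be sketched as follows. This is a rough stand-in, not DotSlash&#8217;s actual parser: Python&#8217;s stdlib json is extended here just enough to tolerate whole-line comments and trailing commas, as seen in the example files in this post.</p>

```python
import json
import re

DOTSLASH_HEADER = "#!/usr/bin/env dotslash"

def parse_dotslash(text: str) -> dict:
    header, _, body = text.partition("\n")
    # DotSlash refuses to parse a file that lacks the required header.
    if header.strip() != DOTSLASH_HEADER:
        raise ValueError("not a DotSlash file")
    # Crude stand-in for the lenient JSON parser: strip /* ... */ blocks,
    # whole-line // comments, and trailing commas. (A real lenient parser
    # would also leave comment-like text inside string literals alone.)
    body = re.sub(r"/\*.*?\*/", "", body, flags=re.S)
    body = re.sub(r"^\s*//[^\n]*$", "", body, flags=re.M)
    body = re.sub(r",\s*([}\]])", r"\1", body)
    return json.loads(body)
```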
<p><span style="font-weight: 400;">DotSlash hashes the information in this entry to compute a corresponding file path (which doubles as a key) in the user&#8217;s local DotSlash cache. DotSlash attempts to exec the corresponding file, replacing argv0 with the path to the DotSlash file and forwarding the remaining command line arguments (--version, in this example) to the exec invocation.</span></p>
<p><span style="font-weight: 400;">If the target executable is in the cache, the user immediately runs Node.js as originally intended. In the event of a cache miss (indicated by exec failing with ENOENT), DotSlash uses the information from the DotSlash file to determine the URL from which to fetch the artifact containing the executable, as well as the size and digest information to verify the contents. If this succeeds, the verified artifact is atomically mv&#8217;d into the appropriate location in the DotSlash cache and the exec invocation is performed again. Note that DotSlash uses advisory file locking to avoid making duplicate requests even if DotSlash files requiring the same artifact are run concurrently.</span></p>
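<p>The cache-keying, verification, and atomic-install steps described above can be sketched in a few lines. This is a simplified model, not DotSlash&#8217;s actual implementation: SHA-256 stands in for BLAKE3 (which is not in the Python standard library), and the cache-key derivation shown is hypothetical.</p>

```python
import hashlib
import json
import os
import tempfile

def cache_path(cache_root: str, platform_entry: dict) -> str:
    # Hash the platform entry from the DotSlash file to derive a stable
    # cache location; the real key derivation is DotSlash-internal.
    key = hashlib.sha256(
        json.dumps(platform_entry, sort_keys=True).encode()
    ).hexdigest()
    return os.path.join(cache_root, key[:2], key)

def verify(artifact: bytes, platform_entry: dict) -> bool:
    # Both the "size" and the "digest" recorded in the DotSlash file
    # must match the fetched bytes before anything is installed.
    return (
        len(artifact) == platform_entry["size"]
        and hashlib.sha256(artifact).hexdigest() == platform_entry["digest"]
    )

def install(artifact: bytes, dest: str) -> None:
    # Write to a temp file in the same directory, then atomically rename
    # into place, mirroring the atomic mv into the cache described above.
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
    with os.fdopen(fd, "wb") as f:
        f.write(artifact)
    os.replace(tmp, dest)
```

On a cache hit, the exec happens immediately; the fetch/verify/install path only runs after exec fails with ENOENT.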
<p><span style="font-weight: 400;">Note that it is common to have multiple DotSlash files refer to the same artifact, such as a .tar.zst file, where each DotSlash file maps to a distinct entry within the archive. For example, suppose node-v18.19.0-darwin-arm64.tar.gz is a compressed tar file that contains many entries, including node, npm, and npx. The DotSlash file for node would be as follows:</span></p>
<p>#!/usr/bin/env dotslash</p>
<p>{<br />
  &#8220;name&#8221;: &#8220;node-v18.19.0&#8221;,<br />
  &#8220;platforms&#8221;: {<br />
    &#8220;macos-aarch64&#8221;: {<br />
      &#8220;size&#8221;: 40660307,<br />
      &#8220;hash&#8221;: &#8220;blake3&#8221;,<br />
      &#8220;digest&#8221;: &#8220;6e2ca33951e586e7670016dd9e503d028454bf9249d5ff556347c3d98c347c34&#8221;,<br />
      // Note the difference from the previous example where &#8220;format&#8221;: &#8220;zst&#8221; has been<br />
      // replaced with &#8220;format&#8221;: &#8220;tar.gz&#8221;, which specifies what type of decompression<br />
      // logic to use as well as the path within the decompressed archive to run when<br />
      // this DotSlash file is executed.<br />
      &#8220;format&#8221;: &#8220;tar.gz&#8221;,<br />
      // Assuming node-v18.19.0-darwin-arm64.tar.gz contains node, npm, and npx in the<br />
      // node-v18.19.0-darwin-arm64/bin/ folder within the the archive, the following<br />
      // is the only line that has to change in the DotSlash file that represents<br />
      // those other executables.<br />
      &#8220;path&#8221;: &#8220;node-v18.19.0-darwin-arm64/bin/node&#8221;,<br />
      &#8220;providers&#8221;: [<br />
        {<br />
          &#8220;url&#8221;: &#8220;https://nodejs.org/dist/v18.19.0/node-v18.19.0-darwin-arm64.tar.gz&#8221;<br />
        }<br />
      ]<br />
    },<br />
    /* other platforms omitted for brevity */<br />
  }<br />
}</p>
<p><span style="font-weight: 400;">As noted in the comments, the only change in the DotSlash files for npm </span><span style="font-weight: 400;">and npx </span><span style="font-weight: 400;">would be the &#8220;path&#8221;</span><span style="font-weight: 400;"> entry. Because the artifact for all three DotSlash files would be the same, whichever DotSlash file was run first would fetch the artifact and put it in the cache whereas all subsequent runs of </span><span style="font-weight: 400;">any</span><span style="font-weight: 400;"> of the three DotSlash files would leverage the cached entry.</span></p>
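<p>For instance, assuming the archive layout above, the DotSlash file for npm would be identical except for this line:</p>

```
"path": "node-v18.19.0-darwin-arm64/bin/npm",
```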
<p><span style="font-weight: 400;">This technique is often used to ensure that a set of complementary executables is released together. Further, because the archive will be decompressed in its own directory, it may also contain resource files (or library files, such as .dll </span><span style="font-weight: 400;">files that need to live alongside .exe </span><span style="font-weight: 400;">files on Windows) that will be unpacked using the directory structure specified by the archive. This also makes DotSlash a good fit for distributing executables that are not binaries, but trees of script files, which is common for Node.js or Python.</span></p>
<h2><span style="font-weight: 400;">Generating DotSlash files</span></h2>
<p><span style="font-weight: 400;">At Meta, most DotSlash files are produced as part of an automated build pipeline. Our continuous integration (CI) system supports special configuration for DotSlash jobs where a user must specify:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A set of builds to run (these can span multiple platforms).</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The resulting generated artifacts to publish to an internal blobstore.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The DotSlash files in source control to update with entries for the new artifacts.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The conditions under which the job should be triggered (this is analogous to </span><span style="font-weight: 400;">workflow triggers on GitHub</span><span style="font-weight: 400;">).</span></li>
</ul>
<p><span style="font-weight: 400;">The result of such a job is a proposed change to the codebase containing the updated DotSlash files. At Meta, we call such a change a </span><span style="font-weight: 400;">“diff,”</span><span style="font-weight: 400;"> though on GitHub, this is known as a </span><span style="font-weight: 400;">pull request</span><span style="font-weight: 400;">. Just like an ordinary human-authored diff at Meta, putting it up for review triggers a number of jobs that include linters, automated tests, and other tools that provide signal on the proposed change. For a DotSlash diff, if all of the signals come back clean, the diff is automatically committed to the codebase without further human intervention.</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-20910" src="https://engineering.fb.com/wp-content/uploads/2024/02/DotSlash_2.png?w=572" alt="" width="572" height="560" srcset="https://engineering.fb.com/wp-content/uploads/2024/02/DotSlash_2.png 572w, https://engineering.fb.com/wp-content/uploads/2024/02/DotSlash_2.png?resize=96,94 96w, https://engineering.fb.com/wp-content/uploads/2024/02/DotSlash_2.png?resize=192,188 192w" sizes="auto, (max-width: 992px) 100vw, 62vw"/>See the Generating DotSlash Files at Meta documentation for details.</p>
<p><span style="font-weight: 400;">The script we use to generate DotSlash files injects metadata about the build job that makes it straightforward to trace the provenance of the underlying artifacts. The following is a hypothetical example of a generated DotSlash file for the </span><span style="font-weight: 400;">CodeCompose</span><span style="font-weight: 400;"> LSP built from source at a specific commit in clang-opt </span><span style="font-weight: 400;">mode. Note the &#8220;metadata&#8221; </span><span style="font-weight: 400;">entries in the DotSlash file will be ignored by the dotslash</span><span style="font-weight: 400;"> CLI, but we include them as structured data so they can be parsed by other tools to facilitate programmatic audits:</span></p>
<p>#!/usr/bin/env dotslash</p>
<p>// @generated SignedSource<<d8621e8ccbd7a595a3018e6a070be9c0>><br />
// https://yarnpkg.com/package?name=signedsource can be used to<br />
// generate and verify the above signature to flag tampering<br />
// in generated code.</p>
<p>{<br />
  &#8220;name&#8221;: &#8220;code-compose-lsp&#8221;,<br />
  // Added by automation.<br />
  &#8220;metadata&#8221;: {<br />
    &#8220;build-info&#8221;: {<br />
      &#8220;job-repo&#8221;: &#8220;fbsource&#8221;,<br />
      &#8220;job-src&#8221;: &#8220;dotslash/code-compose-lsp.star&#8221;,<br />
      // It is considered best practice to build the artifacts for<br />
      // all platforms from the same commit within a DotSlash file.<br />
      &#8220;commit&#8221;: {<br />
        &#8220;repo&#8221;: &#8220;fbsource&#8221;,<br />
        &#8220;scm&#8221;: &#8220;sapling&#8221;,<br />
        &#8220;hash&#8221;: &#8220;0f9e3d9e189bf393f7f9d0b6879361cd76fcdcd0&#8221;,<br />
        &#8220;date&#8221;: &#8220;2024-01-03 20:07:54 PST&#8221;,<br />
        &#8220;timestamp&#8221;: 1704341274<br />
      }<br />
    }<br />
  },<br />
  &#8220;platforms&#8221;: {<br />
    &#8220;linux-x86_64&#8221;: {<br />
      &#8220;size&#8221;: 2740736,<br />
      &#8220;hash&#8221;: &#8220;blake3&#8221;,<br />
      &#8220;digest&#8221;: &#8220;fc8a3ade56a97a6e73469ade1575e8f8e33fda99fbf6df429d555e480d6453d0&#8221;,<br />
      &#8220;format&#8221;: &#8220;zst&#8221;,<br />
      &#8220;providers&#8221;: [<br />
        {<br />
          &#8220;type&#8221;: &#8220;meta-cas&#8221;,<br />
          &#8220;key&#8221;: &#8220;fc8a3ade56a97a6e73469ade1575e8f8e33fda99fbf6df429d555e480d6453d0:2740736&#8221;<br />
        }<br />
      ]<br />
      // Added by automation.<br />
      &#8220;metadata&#8221;: {<br />
        &#8220;build-command&#8221;: [<br />
          &#8220;buck2&#8221;,<br />
          &#8220;build&#8221;,<br />
          &#8220;&#8211;config-file&#8221;,<br />
          &#8220;//buildconfig/clang-opt&#8221;,<br />
          &#8220;//codecompose/lsp/cli:code-compose-lsp&#8221;<br />
        ]<br />
      }<br />
    },<br />
    // additional platforms&#8230;<br />
  }<br />
}</p>
<p><span style="font-weight: 400;">Without DotSlash, a developer would have to run buck2 build --config-file //buildconfig/clang-opt //codecompose/lsp/cli:code-compose-lsp to build and run the LSP from source, which could be a slow operation depending on the size of the build, the state of the build cache, etc. With DotSlash, the developer can run the optimized LSP as quickly as they can fetch and decompress it from the specified URL, which is likely much faster than doing a build.</span></p>
<p><span style="font-weight: 400;">Another thing you may have noticed about this example is that the &#8220;key&#8221;</span><span style="font-weight: 400;"> is not an ordinary URL, but an identifier that happens to be the concatenation of the BLAKE3 hash and the size of the specified artifact. This is because &#8220;type&#8221;: &#8220;meta-cas&#8221; </span><span style="font-weight: 400;">indicates that this artifact must be fetched via a </span><span style="font-weight: 400;">custom provider</span><span style="font-weight: 400;"> in DotSlash, which is specialized fetching logic built into DotSlash that has its own identifier scheme. In this case, the artifact would be fetched from Meta’s in-house content-addressable storage (CAS) system, which uses the artifact hash+size as a key.</span></p>
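<p>Concretely, the key scheme for this provider is just the digest and size joined by a colon. A one-line sketch of the convention visible in the file above (not Meta&#8217;s internal code):</p>

```python
def cas_key(digest: str, size: int) -> str:
    # The "meta-cas" provider addresses artifacts as "<digest>:<size>",
    # as seen in the "key" field of the DotSlash file above.
    return f"{digest}:{size}"
```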
<p><span style="font-weight: 400;">While we do not provide the code for the meta-cas</span><span style="font-weight: 400;"> provider in the open source version of DotSlash, we do include one custom provider out-of-the-box beyond the default http </span><span style="font-weight: 400;">provider.</span></p>
<h2><span style="font-weight: 400;">Using DotSlash with GitHub releases</span></h2>
<p><span style="font-weight: 400;">While DotSlash is generally useful for fetching an executable from an arbitrary URL and running it, we have found the combination of DotSlash and CI to be particularly powerful. To that end, we include custom tooling to facilitate generating DotSlash files for GitHub releases. To ensure DotSlash can fetch artifacts from private GitHub repositories as well as GitHub Enterprise instances, DotSlash includes a custom provider for GitHub releases that includes an appropriate authentication token when fetching artifacts.</span></p>
<p><span style="font-weight: 400;">For example, suppose you have existing workflows that build your release artifacts and publish them via gh release upload. For simplicity, let&#8217;s assume these are named linux-release, macos-release, and windows-release. To create a single DotSlash file that includes the artifacts from all three platforms, you would introduce a new GitHub Action that leverages the workflow_run trigger so it fires whenever one of these release workflows succeeds. (Note that GitHub&#8217;s documentation states: &#8220;You can&#8217;t use workflow_run to chain together more than three levels of workflows,&#8221; so check the depth of your workflow graph if your workflow is not firing.)</span></p>
<p><span style="font-weight: 400;">The .yml</span><span style="font-weight: 400;"> file to define the new workflow would look like this:</span></p>
<p>name: Generate DotSlash File</p>
<p>on:<br />
  workflow_run:<br />
    # These must match the names of the workflows that publish<br />
    # artifacts to your GitHub Release.<br />
    workflows: [linux-release, macos-release, windows-release]<br />
    types:<br />
      &#8211; completed</p>
<p>jobs:<br />
  create-dotslash-file:<br />
    name: Generating DotSlash File<br />
    runs-on: ubuntu-latest<br />
    if: ${{ github.event.workflow_run.conclusion == &#8216;success&#8217; }}<br />
    steps:<br />
      &#8211; uses: facebook/dotslash-publish-release@v1<br />
        env:<br />
          # This is necessary because the action uses<br />
          # `gh release upload` to publish the generated DotSlash file(s)<br />
          # as part of the release.<br />
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}<br />
        with:<br />
          # Additional file that lives in your repo that defines<br />
          # how your DotSlash file(s) should be generated.<br />
          config: .github/workflows/dotslash-config.json<br />
          # Tag for the release to to target.<br />
          tag: ${{ github.event.workflow_run.head_branch }}</p>
<p><span style="font-weight: 400;">Because </span><span style="font-weight: 400;">inputs to GitHub Actions</span><span style="font-weight: 400;"> are limited to string values, facebook/dotslash-publish-release</span><span style="font-weight: 400;"> takes config</span><span style="font-weight: 400;">, which is a path to a JSON file in the repo that supports a rich set of configuration options for generating the DotSlash files. The other required input is the ID of the release, which in GitHub, </span><span style="font-weight: 400;">is defined by a Git tag</span><span style="font-weight: 400;">. When the action is run, it will check to see whether all of the artifacts specified in the config are present in the release, and if so, will generate the appropriate DotSlash files and add them to the release.</span></p>
<p><span style="font-weight: 400;">For example, consider an open source project like Hermes, where a release includes a number of platform-specific .tar.gz files, each containing a handful of executables (hermes, hdb, etc.). To create an individual DotSlash file for each executable, the JSON configuration for the action would be:</span></p>
<p>{<br />
  "outputs": {</p>
<p>    "hermes": {<br />
      "platforms": {<br />
        "macos-x86_64": {<br />
          "regex": "^hermes-cli-darwin-",<br />
          "path": "hermes"<br />
        },<br />
        "macos-aarch64": {<br />
          "regex": "^hermes-cli-darwin-",<br />
          "path": "hermes"<br />
        },<br />
        "linux-x86_64": {<br />
          "regex": "^hermes-cli-linux-",<br />
          "path": "hermes"<br />
        },<br />
        "windows-x86_64": {<br />
          "regex": "^hermes-cli-windows-",<br />
          "path": "hermes.exe"<br />
        }<br />
      }<br />
    },</p>
<p>    "hdb": {<br />
      "platforms": {<br />
        "macos-x86_64": {<br />
          "regex": "^hermes-cli-darwin-",<br />
          "path": "hdb"<br />
        },<br />
        "macos-aarch64": {<br />
          "regex": "^hermes-cli-darwin-",<br />
          "path": "hdb"<br />
        },<br />
        "linux-x86_64": {<br />
          "regex": "^hermes-cli-linux-",<br />
          "path": "hdb"<br />
        },<br />
        "windows-x86_64": {<br />
          "regex": "^hermes-cli-windows-",<br />
          "path": "hdb.exe"<br />
        }<br />
      }<br />
    },</p>
<p>    // Additional entries for hvm, hbcdump, and hermesc…</p>
<p>  }<br />
}</p>
<p><span style="font-weight: 400;">Each entry in &#8220;outputs&#8221;</span><span style="font-weight: 400;"> corresponds to the name of a DotSlash file that will be added to the release. The &#8220;platforms&#8221;</span><span style="font-weight: 400;"> for each entry defines the &#8220;platforms&#8221;</span><span style="font-weight: 400;"> that should be present in the generated DotSlash file. The action uses the &#8220;regex&#8221;</span><span style="font-weight: 400;"> to identify the file in the GitHub release that should be used as the backing artifact for the entry. Assuming the artifact is an “archive” of some sort (.tar.gz</span><span style="font-weight: 400;">, .tar.zst</span><span style="font-weight: 400;">, etc.), the &#8220;path&#8221;</span><span style="font-weight: 400;"> indicates the path within the archive that the DotSlash file should run.</span></p>
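<p>The asset selection can be pictured as a regex match over the release&#8217;s file names. A sketch, using hypothetical asset names that follow the Hermes naming pattern:</p>

```python
import re

def pick_asset(names, pattern):
    # The action matches each platform's "regex" against the asset names
    # in the release and uses the match as the backing artifact.
    matches = [n for n in names if re.match(pattern, n)]
    if len(matches) != 1:
        raise ValueError(f"expected exactly one match, got {matches}")
    return matches[0]

# Hypothetical asset names for a Hermes-style release.
assets = [
    "hermes-cli-darwin-v0.12.0.tar.gz",
    "hermes-cli-linux-v0.12.0.tar.gz",
    "hermes-cli-windows-v0.12.0.tar.gz",
]
```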
<p><span style="font-weight: 400;">In this particular case, Hermes does not provide an ARM-specific binary for macOS, so the &#8220;macos-aarch64&#8221; entry is the same as the &#8220;macos-x86_64&#8221; one. If that changes in the future, a simple update to the &#8220;regex&#8221; values to distinguish the two binaries is all that is needed.</span></p>
<p><span style="font-weight: 400;">Note that the action will take responsibility for computing the digest for each binary. In this example, the resulting DotSlash file for hermes </span><span style="font-weight: 400;">would be:</span></p>
<p>#!/usr/bin/env dotslash</p>
<p>{<br />
  &#8220;name&#8221;: &#8220;hermes&#8221;,<br />
  &#8220;platforms&#8221;: {<br />
    &#8220;linux-x86_64&#8221;: {<br />
      &#8220;size&#8221;: 47099598,<br />
      &#8220;hash&#8221;: &#8220;blake3&#8221;,<br />
      &#8220;digest&#8221;: &#8220;8d2c1bcefc2ce6e278167495810c2437e8050780ebb4da567811f1d754ad198c&#8221;,<br />
      &#8220;format&#8221;: &#8220;tar.gz&#8221;,<br />
      &#8220;path&#8221;: &#8220;hermes&#8221;,<br />
      &#8220;providers&#8221;: [<br />
        {<br />
          &#8220;url&#8221;: &#8220;https://github.com/facebook/hermes/releases/download/v0.12.0/hermes-cli-linux-v0.12.0.tar.gz&#8221;<br />
        },<br />
        {<br />
          &#8220;type&#8221;: &#8220;github-release&#8221;,<br />
          &#8220;repo&#8221;: &#8220;facebook/hermes&#8221;,<br />
          &#8220;tag&#8221;: &#8220;v0.12.0&#8221;,<br />
          &#8220;name&#8221;: &#8220;hermes-cli-linux-v0.12.0.tar.gz&#8221;<br />
        }<br />
      ],<br />
    },<br />
    // additional platforms&#8230;<br />
  }<br />
}</p>
<p><span style="font-weight: 400;">Note that there are two entries in the &#8220;providers&#8221; section for the Linux artifact. When DotSlash fetches an artifact, it will try the providers in order until one succeeds. Regardless of which provider is used, the downloaded binary will be verified against the specified &#8220;hash&#8221;, &#8220;digest&#8221;, and &#8220;size&#8221; values.</span></p>
<p><span style="font-weight: 400;">In this case, the first provider is an ordinary, public URL that can be fetched using curl --location, but the second is an example of a custom provider, as discussed earlier. The &#8220;type&#8221;: &#8220;github-release&#8221; line indicates that the GitHub provider for DotSlash should be used, which shells out to the GitHub CLI (gh, which must be installed separately from DotSlash) to fetch the artifact instead of curl. Because facebook/hermes is a public GitHub repository, the first provider should be sufficient here. However, if the repository were private and the fetch required authentication, we would expect the first provider to fail and DotSlash would fall back to the GitHub provider. Assuming the user had run gh auth login in advance to configure credentials for the specified repo, DotSlash would be able to fetch the artifact using gh release download.</span></p>
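<p>The fallback behavior can be sketched as a simple loop over the &#8220;providers&#8221; list. This is a simplified model, not DotSlash&#8217;s actual code; in real DotSlash the size/digest verification applies regardless of which provider supplied the bytes.</p>

```python
def fetch_with_fallback(platform_entry, fetchers):
    # Try each provider in order until one succeeds, as DotSlash does.
    # `fetchers` maps a provider "type" to a function that fetches bytes;
    # a plain URL entry defaults to the "http" fetcher.
    for provider in platform_entry["providers"]:
        kind = provider.get("type", "http")
        try:
            return fetchers[kind](provider)
        except Exception:
            continue  # this provider failed; fall back to the next one
    raise RuntimeError("all providers failed")
```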
<p><span style="font-weight: 400;">By publishing DotSlash files as part of GitHub releases, users can copy them to their own repositories to “vendor in” a specific version of your tool with minimal effect on their repository size, regardless of how large your releases might be.</span></p>
<h2><span style="font-weight: 400;">Try DotSlash Today </span></h2>
<p><span style="font-weight: 400;">Visit the </span><span style="font-weight: 400;">DotSlash site for</span><span style="font-weight: 400;"> more in-depth documentation and technical details. The site includes instructions on </span><span style="font-weight: 400;">Installing DotSlash</span><span style="font-weight: 400;"> so you can start playing with it firsthand. </span></p>
<p><span style="font-weight: 400;">We also encourage you to </span><span style="font-weight: 400;">check out the DotSlash source code</span><span style="font-weight: 400;"> and provide feedback via </span><span style="font-weight: 400;">GitHub issues</span><span style="font-weight: 400;">. We look forward to hearing from you!</span></p>The post <a href="https://dailyzsocialmedianews.com/dotslash-simplified-executable-deployment-engineering-at-meta/">DotSlash: Simplified executable deployment – Engineering at Meta</a> first appeared on <a href="https://dailyzsocialmedianews.com">DAILY ZSOCIAL MEDIA NEWS</a>.]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
