
Deploying a Monorepo

For the past 8 years, Netflix has been building and evolving a robust microservice architecture in AWS. Throughout this evolution, we learned how to build reliable, performant services in AWS. Our microservice architecture decouples engineering teams from each other, allowing them to build, test and deploy their services as often as they want. This flexibility enables teams to maximize their delivery velocity.

Velocity and reliability are paramount design considerations for any solution at Netflix. In our architecture, each microservice provides its consumers with a client library that handles all of the IPC logic. This provides a number of benefits to both the service owner and the consumers.

In addition to the consumption of client libraries, the majority of microservices are built on top of our runtime platform framework, which is composed of internal and open source libraries. While service teams do have the flexibility to release as they please, their velocity can often be hampered by updates to any of the libraries they depend on.

An upcoming product feature may require a number of microservices to pick up the latest version of a shared library or client library. Updating dependency versions carries risk. Or put simply, managing dependencies is hard. To address the challenges of managing dependencies at scale, we have observed companies moving towards two approaches: share little and monorepos.

While both approaches address the problems of managing dependencies at scale, they also impose certain challenges. The share little approach favors decoupling and engineering velocity, while sacrificing code reuse and consistency. The monorepo approach favors consistency and risk reduction, while sacrificing freedom by requiring gates to deploying changes.


Adopting either approach would entail significant changes to our development infrastructure and runtime architecture. Additionally, both solutions would challenge our culture of Freedom and Responsibility.


Can we provide engineers at Netflix the benefits of a monorepo while still maintaining the flexibility of distributed repositories? Using the monorepo as our requirements specification, we began exploring alternative approaches to achieving the same benefits.

What are the core problems that a monorepo approach strives to solve? Can we develop a solution that works within the confines of a traditional binary integration world, where code is shared? Our approach, while still experimental, can be distilled into three key features: publisher feedback, managed source, and distributed refactoring. We are just starting our journey. Our publisher feedback service is currently being alpha tested by a number of service teams, and we plan to broaden adoption soon, with managed source not far behind.

Our initial experiments with distributed refactoring have helped us understand how best to rapidly change code globally. We also see an opportunity to reduce the size of the overall dependency graph by leveraging tools we build in this space. We believe that expanding and cultivating this capability will allow teams at Netflix to achieve true organization-wide continuous integration and reduce, if not eliminate, the pain of managing dependencies.

If this challenge is of interest to you, we are actively hiring for this team.

Deploying Monorepo Apps

With AWS Lambda, we can deploy and scale individual functions. However, we engineers still like to think in terms of services and maintain a mapping between business capabilities and service boundaries. The service level abstraction makes it easier for us to reason about large systems.

As such, cohesive functions that work together to serve a business feature are grouped together and deployed as a unit, i.e. a service. This makes sense when you consider that engineers are typically organized into teams, and each team would own one or more services. The service boundaries allow the teams to be autonomous and empower them to choose their own technology stacks.

The fact that the functions that make up a service are scaled independently is merely an implementation detail, albeit one that can work very well to our advantage. In a future post, we will compare the various deployment frameworks on the market today. In this post, we will address the question of how you organize your functions into repos.

We will compare two common approaches: putting every service in a single monorepo, and giving each service its own repo. We will then offer some advice on choosing which approach you should consider for your organization. This topic is not specific to serverless, or to any particular technology for that matter.

On both sides of the fence you will find organizations using the same stack: JVM, .NET, containers, AWS Lambda, on-premise, and so on. Before we delve into it, I should clarify that within a monorepo you can still deploy services independently. You simply cannot fit every project into a single CloudFormation template, given the resource limits. Instead, functions are still organized into services, and each service resides within a subfolder in the repo (see the sketch below). This is the approach employed by many of the big tech companies, Google, Facebook and Twitter to name a few.
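To make this concrete, a monorepo of independently deployable services might be laid out roughly as follows. This is only an illustrative sketch: the service names, the services/ folder, and the shared lib/ folder are placeholders, not a prescribed structure.

```
my-app/                     # one repo, many independently deployable services
├── services/
│   ├── users/              # each service owns its own stack/template
│   │   ├── handler.js
│   │   └── serverless.yml
│   └── orders/
│       ├── handler.js
│       └── serverless.yml
└── lib/                    # code shared by several services
```

Because each service keeps its own template, a change to one service can be deployed without touching the stacks of the others, even though everything lives in one repo.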

Outside that rarefied circle, this approach is often frowned upon or even ridiculed.

A Stack Overflow question, "Deploying to Firebase Functions with a monorepo", describes one of the practical snags:

I can successfully deploy the "web" aspect to Firebase Hosting, which references a shared "core" workspace. However, attempts to do the same with the "functions" workspace on Firebase Functions fail. In an attempt to enable Yarn workspaces in Firebase, and therefore share my core package, I've used the nohoist feature to create symlinks to the core workspace in functions and web, as per twiz's Stack Overflow answer.
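For illustration, that kind of nohoist setup typically lives in the root package.json of the workspace. A minimal sketch, assuming the workspace folders are named functions, web and core as in the question:

```json
{
  "private": true,
  "workspaces": {
    "packages": ["functions", "web", "core"],
    "nohoist": ["**/core"]
  }
}
```

With a nohoist pattern like this, Yarn keeps core inside each consuming workspace's node_modules instead of hoisting it to the repo root, which is what produces the symlinks mentioned above.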

The core package also exists as a dependency in functions and web. There are no problems when any of this runs locally, and in fact deployment of the web package to Firebase Hosting works fine. However, deployment of the functions package to Firebase Functions throws an error.

I can get the functions package to deploy if I yarn pack the core workspace and reference it as a tarball in package.json.
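That workaround might look roughly like the following entry in functions/package.json; the tarball filename here is hypothetical, since yarn pack names the archive after the package's name and version:

```json
{
  "dependencies": {
    "core": "file:./core-v1.0.0.tgz"
  }
}
```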

Any advice would be appreciated to resolve this issue.




When I talk to friends and relatives about what I do at Etsy, I have to come up with an analogy for what Frontend Infrastructure is. The analogy that I usually fall back on is that of a restaurant: the meal is a fully formed web page, the chefs are product engineers, and the kitchen is the infrastructure.

A good kitchen should make it easy to cook a bunch of different meals quickly and deliciously. Recently, my team and I spent over a year swapping out our home-grown, RequireJS-based JavaScript build system for Webpack. Running with this analogy a bit, this project is like trading out our kitchen without customers noticing, and without bothering the chefs too much. Large projects tend to be full of unique problems and unexpected hurdles, and this one was no exception.

This post is the second in a short series on all the things that we learned during the migration, and is adapted in part from a talk I gave at JSConf. The first post can be found here.

At Etsy, we have a whole lot of JavaScript. When we deploy our web code, we need to build and deploy a great many different JavaScript assets, made up of over twelve thousand different JavaScript files, or modules. When starting to adopt Webpack, one of the first places we saw an early win was in our development experience.

Up until this point, our engineers had been using a development server that we had written in-house. We ran a copy of it on every developer machine, where it built files as they were requested.

This approach meant that you could reliably navigate around Etsy.


It also meant that we could start and restart an instance of the development server without worrying about losing state or interrupting developers much. Conceptually, this made things very simple to maintain. In practice, however, developers were asking for more from JavaScript and from their build systems. We started adopting React a few years prior using the then-available JSXTransform tool, which we added to our build system with a fair amount of wailing and gnashing of teeth.

The result was a server that successfully, yet sluggishly, supported JSX. Building some of our weightier JavaScript code often took the better part of a minute, and most of our developers grew increasingly frustrated with the long iteration cycles it produced. Clearly, there was a lot with our development environment that could be improved. To be worth the effort of adopting, any new build system we adopted would at least have to support the ability to transpile syntaxes like JSX, while still allowing for fast rebuild times for developers.


Webpack seemed like a pretty safe bet: it was widely adopted; it was actively developed and funded; and everyone who had experience with it seemed to like it in spite of its intimidating configuration. So, we spent a good bit of time configuring Webpack to work with our codebase and vice versa.

This involved writing some custom loaders for things like templates and translations, and it meant updating some of the older parts of our codebase that relied on the specifics of RequireJS. After a lot of planning, testing, and editing, we were able to get Webpack to fully build our entire codebase.
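To give a flavour of what such a loader involves, here is a minimal sketch of a Webpack loader in the spirit of the translation loaders described above. It is not Etsy's actual code; the __("...") message syntax and the lookupTranslation helper are hypothetical stand-ins.

```js
// A minimal, hypothetical Webpack loader: it receives a module's source as a
// string and returns transformed source. Not Etsy's real implementation.
module.exports = function translationLoader(source) {
  // Loaders can declare themselves cacheable so rebuilds stay fast.
  this.cacheable();

  // Replace __("message.key") calls with a translated string literal.
  return source.replace(/__\("([^"]+)"\)/g, (match, key) =>
    JSON.stringify(lookupTranslation(key))
  );
};

// Hypothetical helper standing in for a real translation catalog lookup.
function lookupTranslation(key) {
  return "translated:" + key;
}
```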

Google has the most famous monorepo, and they go further: teams share code at the source level instead of linking in previously built binaries. Third-party libraries like JUnit are checked into the repo at a specific version.


Google and Facebook are the most famous organizations that rest development on a single company-wide trunk, which fits the monorepo design. Netflix, and the Uber iOS application team, have disclosed that they do too. With the monorepo model, there is a strong desire to have third-party binaries in source control too.

You might think that it would be unmanageable for reasons of size. In terms of commit history, Perforce and Subversion do not mind a terabyte of history of binary files (or more), and Git performed much better once Git-LFS was created. You could still feel that the HEAD revision of thousands of fine-grained dependencies is too much for a workstation, but that can be managed via an expanding and contracting monorepo.
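As a small illustration of the Git-LFS point, large checked-in binaries can be routed through LFS with a .gitattributes rule, so every clone carries pointers rather than full binary history. The path pattern below is hypothetical:

```
# .gitattributes: store jars under third_party/ as Git-LFS objects
third_party/**/*.jar filter=lfs diff=lfs merge=lfs -text
```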

It could be that your application team depends on something, say an ORM technology, that is made by colleagues from a different team.


For monorepo teams there is a strong wish to depend on the source of that ORM technology and not a binary. Google has Blaze internally.

Ex-Googlers at Facebook with newfound friends missed that, wrote Buck and then open-sourced it. Google then open-sourced a cut-down Blaze as Bazel. There is also the ability to depend on recently compiled object code of colleagues.

That is in place to shorten compile times for prod and test code. Maven also traverses its tree in a strict depth-first, then breadth, manner. Most recursive build systems can be configured to pull third-party dependencies from a relative directory in the monorepo. A binary dependency cache outside of the VCS-controlled working copy is more normal.

Recursive build systems mostly have the ability to choose a type of build. For in-house dependencies, where the source is in the same monorepo, you do not have this problem of lagging versions: the team that first wanted the increased functionality makes the change for all teams, keeping everyone at the HEAD revision of it.

The concept of a version number disappears in this model. For third-party dependencies the same rule applies: everyone upgrades in lock-step. Problems can ensue, of course, if there are real reasons for team B not to upgrade and team A is insistent.

Broken backward compatibility is one problem. At one point, Google tried to upgrade their JUnit from version 3 to version 4; the change-set became very large and struggled to keep up with the rate at which developers were adding tests. Because you are doing lock-step upgrades, you only secondarily note the version of the third-party dependencies, as you check them into source control without version numbers in the filename.

Above we contrasted directed-graph and recursive build systems.


Build, Test, and Deploy Apps Independently From a Monorepo

A monorepo Serverless app is one where multiple Serverless services are in the same repo. This means that a commit will trigger a deployment to all the services in Seed. However, this can be a slow process if you are only trying to deploy a change to a single service. To fix this, Seed does two things: it checks which services have code changes in a commit, and it skips the deployment for the services that have not changed. This two-step process works in tandem to only deploy the services that have been updated.

This greatly speeds up your builds and also makes deployments cost-effective. Seed uses a simple algorithm to determine if a service in your app needs to be deployed.
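As a rough sketch of the idea (this is an illustration, not Seed's actual implementation), a change-detection step might compare the files touched by a commit against each service's directory:

```js
// Illustrative only: decide which services to redeploy from a list of
// changed file paths and the services' directory paths.
function servicesToDeploy(changedFiles, serviceDirs) {
  // A file outside every service directory is treated as shared code,
  // so to be safe we redeploy everything (an assumption of this sketch).
  const sharedChange = changedFiles.some(
    (file) => !serviceDirs.some((dir) => file.startsWith(dir + "/"))
  );
  if (sharedChange) {
    return serviceDirs;
  }
  // Otherwise, only the services whose directories contain changes.
  return serviceDirs.filter((dir) =>
    changedFiles.some((file) => file.startsWith(dir + "/"))
  );
}

// Example: only serviceA is redeployed for this commit.
console.log(servicesToDeploy(["serviceA/handler.js"], ["serviceA", "serviceB"]));
```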

The algorithm is based on the directory structure of your Serverless app. If your app only has one service, Seed will always run the deployment process for that service. If your app has multiple services and they are all in directories on the same level, it might look something like this:
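A layout along those lines, with placeholder service names since the original snippet is not shown:

```
/
├── serviceA/
│   ├── handler.js
│   └── serverless.yml
└── serviceB/
    ├── handler.js
    └── serverless.yml
```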

The algorithm used for this is fairly straightforward: to determine if Seed should deploy a service, it checks whether any of the changed files fall within that service's directory. The final case is an app with multiple services that are nested inside each other; there are two variants of this.

ZEIT is the optimal workflow for frontend teams: push to Git and your website is live, with zero configuration required.


It works with your favorite frontend frameworks. Push to deploy and preview, then merge to go live: an intuitive workflow that makes collaboration easy with the whole team. You can gather feedback early in the development cycle without leaving the browser, and release with confidence by running checks against each deployment, with instant reverts.

It integrates with your Git provider and goes beyond static with the Jamstack: generate blazing fast pages and augment them with rich JavaScript that brings your apps alive. ZEIT claims the fastest build system on the market, so you don't waste time staring at CI and logs, and deploy hooks let you trigger instant re-builds of your site.

This is perfect for working with a CMS, and incremental generation means you don't have to re-build the whole thing when you can re-build one page at a time. It works with your stack.

Sites ship directly to the edge: always fast, always online.

