
Does Laravel Scale?

June 19, 2022

In this blog post, I will explore whether you can use Laravel at hyper-scale and whether it could be used to power something like Twitter, Facebook or various other huge applications.

What brought me here

We’re all getting tired of the “Does Laravel scale?” questions. Not because people are asking questions but because of the ignorant responses to the question. It has happened numerous times now, and the storyline is always the same.

In this post, I’m going to address this question once and for all: Does Laravel scale?

Why are you worried?

Before we get into details about whether Laravel can scale, you must understand that most people are worried about a situation they will never experience. You are not building the next Google, Facebook or YouTube. I don’t write that as an insult, and I’m not a pessimist (I’m an entrepreneur who quit his job to work on a startup!), but it’s mentally healthy for us to keep our expectations based on reality.

So when people ask the question, “Does Laravel scale?” because they’re starting a company or about to build an application for a multi-billion dollar company, they need to realize that they just won’t hit the scale of Facebook.

How many requests does Wikipedia handle?

Wikipedia is one of the biggest websites in the world, and, guess what, it runs on PHP. According to a TechCrunch article from 2020, Wikipedia was processing 255 million pageviews per day on average. So how does that look per second and per month?
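Back-of-the-envelope, assuming traffic is spread evenly across the day and a ~30-day month:

255,000,000 pageviews / 86,400 seconds ≈ 2,950 requests per second
255,000,000 pageviews × 30 days ≈ 7.65 billion requests per month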

Do we consider Wikipedia to be a big website?

How many requests does Facebook handle?

They state in a 2010 blog post that they were processing 100 billion “hits” per day at 500 million users. That’s gigantic. The word “hit” is vague and could correspond to anything, but let’s assume it’s web requests. Let’s see what it looks like per second and per month:
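Using the same back-of-the-envelope maths:

100,000,000,000 hits / 86,400 seconds ≈ 1.16 million requests per second
100,000,000,000 hits × 30 days = 3,000,000,000,000 (3 trillion) requests per month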

So that’s a staggering amount of traffic, beyond comprehension. And that was in 2010!

Could Laravel be scaled to handle 3 trillion requests per month? Sure.

Would you likely bring in other frameworks for certain parts of the app that made sense? Yes.

Will we ever be architecting an application processing 3 trillion requests per month? Statistically, no. Again, that’s not me being a pessimist.

Realistically, you’re going to be scaling up to between 1 million requests and 100 billion requests each month. Everyone has a different idea about a “big” application, so I’ve given a huge range here.

Now we’ve addressed the fact that we’re worrying about something that statistically will never happen, let’s move on to scaling Laravel in the real world.

Your benchmarks are nonsense

Benchmarks are constantly thrown around whenever you discuss whether something can scale. You’ll end up seeing some random framework at the top of the list that nobody has ever heard of, that has a poor developer experience, zero community and a barren ecosystem. But because it can handle the most requests per second on a single server, it’s apparently the best.

Despite my eagerness to debate whether Laravel could run at 3 trillion requests per month, I will concede that once you reach a certain point where costs are through the roof and you’re looking for something more budget-friendly at scale, you will probably switch from Laravel to something in C++ or Rust, at least for parts of the application. But what kind of traffic would you need to be doing to switch?

Recently, someone on Twitter linked to the TechEmpower Framework Benchmarks. Laravel sits at position #388, managing 4,833 requests per second, compared with the #1 position held by drogon-core at 666,737 req/s. 666, eh?

Well, wait a minute, that means drogon-core can perform 1,752,184,836,000 requests per month (666,737 req/s × 86,400 seconds × ~30.4 days). And in 2010, Facebook did 3,000,000,000,000 requests per month…

3,000,000,000,000 / 1,752,184,836,000 = 1.71

Hey, someone should tell Facebook they should move all of their code to drogon-core, and they’ll be able to power their entire application layer across two web servers. They’ll be so pleased to hear the news.

My point is that these benchmarks are worthless when you’re having a conversation about whether Laravel can scale. You look at the Laravel result in the table above, you think, “huh, something seems off”, then you scroll down and see the following note:

In this test, the framework’s ORM is used to fetch all rows from a database table containing an unknown number of Unix fortune cookie messages (the table has 12 rows, but the code cannot have foreknowledge of the table’s size)

Well, duh. Laravel doesn’t use connection pooling out of the box, there are no persistent connections, and every one of those requests bootstraps the framework and opens a fresh database connection. A few obvious objections:

They’re using PHP-FPM…
What about Laravel Octane?
What about persistent connections?
What about some kind of connection pooling layer?
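To make the “persistent connections” point concrete, here is a minimal sketch of what enabling them looks like in a stock Laravel app, via the options array in config/database.php (Laravel hands these straight to the PDO constructor). It’s off by default, which is exactly the point:

```php
// config/database.php (sketch) – keep the MySQL connection open between
// PHP-FPM requests instead of reconnecting on every single request.
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'options' => [
        PDO::ATTR_PERSISTENT => true, // not enabled out of the box
    ],
],
```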

These benchmarks are just absolute nonsense. Why does anybody take them seriously? And who decided that generic benchmarks were some authority that mattered? Because they’re not. I conceded above that, at scale, you’d probably want something faster than PHP, but for 99.99994% of businesses (and more, I’d guess), you’re going to smash it with Laravel. And you’re going to benefit from excellent documentation, an incredible community, and an ecosystem Mark Zuckerberg would’ve killed for when he first built Facebook.

When we started seeing incredible growth in our software, Fathom Analytics (which is built on Laravel), our problems were all database-related. We never had moments of “does the framework do enough requests per second?”. And even then, we’re an outlier. Most people aren’t doing as much traffic as we are, so they’re not even worried about optimizing the HTTP layer and are running comfortably with a relatively simple setup.

What’s also funny is when people throw around the term “enterprise” to try and discredit Laravel. Suddenly, because a business is doing millions/billions of dollars in revenue, apparently Laravel isn’t fit for purpose? Nonsense.

I’ve worked with enterprise companies using Laravel to power their entire business, and companies such as Twitch, Disney, New York Times, WWE and Warner Bros are using Laravel for various projects they run.

Laravel can handle your application at scale.

What should I be worried about?

Now that I’ve addressed some of the broken mental models some of us are guilty of, I want to get into the nitty-gritty of scaling a Laravel application. Before I co-founded a website analytics platform, I never had any experience with scaling applications. My experience up until running this business had been building applications for 100 – 10,000 users, and I’d certainly never had to worry about dealing with billions of page views.

So if Laravel isn’t the area we should be worried about when it comes to scaling, what is?

Database, cache and sessions.

Your database is going to be the bottleneck as you scale. I’m assuming a traditional MySQL/PostgreSQL setup here. You’re not going to run into problems if you’re using DynamoDB or SingleStore from the start, as these solutions are built for gigantic scale with minimal configuration. Anyway, database performance is a whole career, and many things can be tweaked outside of the queries you write.

Our database journey took us through several managed MySQL setups before we eventually landed on SingleStore, a database built for hyper-scale.

We used Redis as our cache for a few years and then moved to DynamoDB (so we didn’t have to worry about sizing Redis or NAT gateways on AWS). Nowadays, we use SingleStore’s in-memory tables for caching because it’s rapid.
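For what it’s worth, swapping Laravel’s cache backend is mostly configuration rather than code changes. A minimal sketch of pointing the cache at DynamoDB, using the dynamodb store that ships in config/cache.php (and assuming the table already exists):

```php
// config/cache.php (sketch) – use the built-in DynamoDB store instead of Redis.
'default' => env('CACHE_DRIVER', 'dynamodb'),

'stores' => [
    'dynamodb' => [
        'driver' => 'dynamodb',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
        'table' => env('DYNAMODB_CACHE_TABLE', 'cache'), // table that holds cache items
    ],
],
```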

We only worked with managed services because we are not database engineers. If you’re a database engineer, you’ll have no trouble scaling your database, and you already know how to scale it horizontally, tweak it, etc. But for the rest of us, be ready for database headaches as you grow.

Queue system

The Laravel queue system has multiple drivers such as Amazon SQS, Redis, database, etc. The purpose of the queue system, as you know, is to offload time-consuming jobs into the background to ensure fast page response time for your users.
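As a quick illustration (the job name and payload here are hypothetical), offloading work in Laravel is just a queued job class and a dispatch call; the driver behind it (SQS, Redis, database) is purely configuration:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job: process a pageview outside the request/response cycle.
class ProcessPageview implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public array $pageview)
    {
    }

    public function handle(): void
    {
        // Anything slow (spam checks, aggregation, inserts) lives here and is
        // executed by a queue worker rather than blocking the web request.
    }
}

// Dispatching from a controller (hypothetical):
//   ProcessPageview::dispatch($pageviewData);
```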

We are big fans of SQS because it has unlimited throughput and excellent security, and it stores jobs across multiple availability zones. Now that we use SingleStore for our database, it could power our queue too, but we like that our queue system is separate from the database (for fault-tolerance reasons).

One of my concerns when we used Redis for our queue was, “What happens if we have a queue backlog, Redis fills up, and we run out of space?” And that was a legitimate concern because we came close in the early days. In addition to that, there’s a bunch of stuff you can do to scale Redis, which most people won’t get to in their careers, but it is possible. You can configure Redis to run at scale on Laravel queues, and many people have. Or you can use SQS or SingleStore. Easy.

Other services

If you’re using PHP-FPM, you’ll want to watch it, load test it, and configure it as optimally as possible. But beyond that, your “external services,” such as your database and queue system, will become your primary bottleneck before PHP-FPM does (most of the time; I’m sure there’ll be exceptions, and I’m sure the reply guys will let me know). In addition to the above, don’t forget the following:

Writing code for scale

When running at scale, you need to think about things you might not have thought about before. I’ll share a few thoughts on this, but this isn’t a comprehensive coding guide 🙂

One of my earliest “WTF” moments came when I used database transactions for inserts and ran into deadlock errors over the auto-increment lock on tables. Long story short, I effectively had multiple multi-query, multi-table transactions fighting to acquire the auto-increment lock to write a row. The initial solution to this problem, which worked well, was to have the transaction retry 50 times, but that’s not going to scale and is highly inefficient.
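For reference, that retry workaround is a one-liner in Laravel: the second argument to DB::transaction is the number of attempts. A sketch (the closure body is hypothetical):

```php
use Illuminate\Support\Facades\DB;

// Re-run the whole transaction up to 50 times if it deadlocks.
// Every retry replays every query in the closure, so under heavy
// write contention this burns a lot of database time.
DB::transaction(function () {
    // multiple inserts/updates across several tables...
}, 50);
```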

In the end, we moved away from using transactions, and that fixed the problem. My point is that you won’t notice this kind of issue at a low scale, but once things start ramping up, you’ll start seeing things like this.

So with that said, here are a few thoughts I have on writing code for scale:

How we deploy for hyper-scale

Now that I’ve covered areas to “worry” about and a few tips on writing code at scale, I will talk about how we scale our application. We run a privacy-first analytics service, and our traffic is the aggregate of all of our customers’ website traffic. So if we land ten new websites as customers and they each process 50 million website requests each month, our analytics service needs to process an extra 500 million requests per month. As we start to get hundreds of thousands of websites using our service, you can see why we are obsessed with scaling our service.

Before I get into how we deploy things, I should be very clear that you can scale Laravel with EC2 (with autoscaling), ECS, Heroku, DigitalOcean, etc. We don’t use those services for Fathom, so I’m not going to cover them. I will talk specifically about how we set up our infrastructure to handle hyper-scale as we grow rapidly.

As a lot of you will know, I am a big fan of serverless infrastructure. I teach the Serverless Laravel course, where I have taught over a thousand people how to run their applications at scale without worrying about managing servers. That course came after we deployed our software to Laravel Vapor because I fell in love.

I’m going to cover our analytics collector application, not our dashboard. Although our dashboard receives millions of requests each month, our analytics collector (separate deployment) is the hyper-scale part of our business.

Firstly, we use a CDN as the core entry point to our infrastructure. We use bunny.net, which processes just under 1 trillion requests each month across all of its customers. They comfortably handle our traffic, provide us with basic security settings (rate limiting, etc.) and allow us to route traffic as we need (e.g. routing EU traffic to our EU infrastructure via EU isolation).

Once traffic gets through Bunny, it’s routed to one of two places: EU traffic goes through our EU isolation proxy, and everything else heads straight to our load balancer.

The proxy hits the same point that our “outside EU” traffic hits: our Application Load Balancer (ALB), a managed load balancer service by Amazon that can handle incredible throughput. Of course, you could roll this yourself if you have the expertise (or can hire the expertise), but we are big fans of managed services. Before the traffic gets routed past the ALB, it passes through our Web Application Firewall (WAF), which rate limits and protects against malicious traffic.

Once it’s passed through the WAF & ALB, it heads to Lambda. Our burst throughput limit is 60,000 requests per second, and we can push beyond that. At the time of writing, that equates to roughly 157 billion requests per month.
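Roughly, assuming that burst rate were sustained around the clock for a ~30-day month:

60,000 requests/second × 86,400 seconds/day × ~30.4 days ≈ 157,000,000,000 requests per month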

Lambda passes the request to SQS, which triggers another Lambda to process the request. We plan to replace SQS with a direct database write soon (to save money and improve performance), but we’ll still keep SQS as an availability backup. We wouldn’t dream of doing this if we were running RDS for MySQL, but we’re now running a database built for hyper-scale (SingleStore).

Once it’s in Lambda being processed, we hit our database a few times (via key/value lookup in memory) for some spam evaluation (we call this our 3rd firewall) and then we run a series of reads & inserts. And this runs beautifully at scale because we’re using the correct database for the job.

One of our most significant issues with scaling was the open/close of the database connection, as this is expensive. Previously, every incoming pageview would open a database connection, bootstrap the framework, run the queries, and close the connection, which wasn’t scalable. But we now have Laravel Octane, which will hold open connections in memory. That means we can open up a database connection on container boot, and subsequent requests will use that same connection. And, yes, it’s so much faster, and our database CPU loves us for doing this.

Is Laravel good for big projects?

I get it. You’re sitting in a meeting with management, and management is concerned about whether Laravel can scale. Or perhaps you’re worried about whether your side project or new business will take off and Laravel will fall over? Well, here’s the stack I would use if I were starting a new project right now and management said we could get billions of requests per month: Laravel on Vapor, SingleStore for the database, SQS for the queue, and a CDN such as bunny.net in front.

And you’ll be good to go. So when your management says, “Hey, we need to go international, and we need servers in Asia,” you can move to use Global Accelerator. You’d deploy multiple Vapor regions and put them behind it. Then you’d talk to SingleStore about cross-region replication (we don’t replicate our database across regions, but I know they have customers who do).

The end

The conclusion is that Laravel is a fantastic choice for 99.99994%+ of web applications.

But if Mark Zuckerberg emailed you right now and said:

“Hey, just read the Facebook chat messages between you and your partner; you seem exciting and adventurous. Would you rebuild Facebook for us?”

Could you use Laravel to power Facebook? Yes.

Jack Ellis is a software engineer and the co-founder of Fathom Analytics. He’s also the co-host of Above Board and teaches the Serverless Laravel course.

