Over the past 18 months I’ve been working on making Ophal perform well at the front-end and put a minimal load on the server side, letting it quickly evolve as a playground framework (e.g. MeQuejo.PE!, Zophin and develCuy’s blog search engine). Now it is time to work on more complex stuff: the three features left before a first Ophal beta release:

  • Browser’s cache support (partially implemented)
  • Session handler (under development)
  • File uploads (a long story, see below)

Browser’s cache

I’ve been amazed by the “Caching Tutorial for Web Authors and Webmasters”, which demonstrates the potential of efficient browser cache handling. It also makes me think about how much room is still left for using the web efficiently without breaking the standards. My vision is to make Ophal take full advantage of browsers; I will comment a bit more on this later.
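To make the idea concrete, the tutorial’s core trick is the conditional request: the server sends a validator (an ETag) along with the page, the browser echoes it back, and an unchanged page costs only a tiny 304 instead of a full response. Here is a minimal sketch in Python for illustration (Ophal itself is Lua; the function name and the 5-minute max-age are my assumptions, not Ophal’s actual API):

```python
import hashlib

def cache_headers(body, if_none_match=""):
    # Validator-based caching: hash the body into an ETag, and answer
    # 304 Not Modified when the browser already holds this exact version.
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    headers = {
        "ETag": etag,
        "Cache-Control": "public, max-age=300",  # browsers may reuse for 5 min
    }
    if if_none_match == etag:
        return 304, headers, b""  # no body: the browser's copy is still good
    return 200, headers, body

page = b"<h1>Breaking news</h1>"
status, headers, body = cache_headers(page)
# A repeat visit echoes the ETag back via If-None-Match:
status2, _, body2 = cache_headers(page, if_none_match=headers["ETag"])
```

Every 304 saved this way is a response body our heroic CGI server never has to generate or transmit.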

Session handler

This is the central point of this post: the current Ophal HEAD has a very incomplete session handler, so let’s elaborate a bit more. First we need a worst-case scenario. Say we run a newspaper, and let’s make the numbers hard to deal with for our calculations:

  • Unique visitors: 2 million in a single day (each one gets its own session)
  • Avg. page views per visitor: 5, which makes 10 million impressions in that same day,
  • Comments on articles: 500 thousand,
  • Context tracking: some, enough to customize a few blocks of content according to visitor behavior
  • Infrastructure: one single (heroic) CGI app server, an SQLite3 file database, a dual-core 2GHz CPU, 2GB of RAM, and a 500GB hard drive (5400rpm)

For reference, a single day is 86,400 seconds, or 86,400,000 milliseconds. How much work must Ophal do in a single second in order to survive, and even defeat, such a scenario?

  1. Scale high! Breaking news might produce a flash crowd that breaks our server, so we should be ready for the Slashdot effect and Internet prime time happening at the same time (with good luck!). That is about 900 thousand more visitors plus 20% more traffic than regular hours, a rounded total of 30% extra pain. Further calculations use a range format, for example: 100-130 means regular-(regular+30%).
  2. Rely on I/O! Writes are the first bottleneck: we have to create 24-32 session records every single second. Also, given the page views and context tracking, the session handler should be able to update a given session at least 5-7 times (the first “update” being the creation of the session itself). In the worst case, (32 sessions) * (7 page views) is 224 reads and 224 writes per second, because Ophal needs to read the session record and then store the updated version before sending any response. Let’s stay focused on writes.
  3. Have room for more writes! The Ophal beta will be a minimal CMS, “just” a CRUD of content. That activity still needs to be measured; our server would need an extra hard drive to store the content.
  4. “Beat Node.js!” That is not a humble statement, I know ;). Have you seen Spludo? It claims: “600-4000 dynamic requests/second with one single 3Ghz core.” Let’s set Spludo as our first rival, although Ophal still needs to walk a long way before it can be compared with anything else :(
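The figures above can be reproduced with a quick back-of-envelope script. All constants come straight from the scenario; nothing here is measured:

```python
# Back-of-envelope check of the session-write figures; constants come
# straight from the worst-case scenario, nothing here is measured.
SECONDS_PER_DAY = 86_400
VISITORS = 2_000_000        # unique sessions created per day
SURGE = 1.3                 # +30% "extra pain" during a flash crowd
UPDATES_PER_SESSION = 7     # session creation plus follow-up updates

sessions_per_sec = VISITORS / SECONDS_PER_DAY        # about 23.1 new sessions/s
peak_sessions_per_sec = sessions_per_sec * SURGE     # about 30.1, rounded up to 32
peak_writes_per_sec = 32 * UPDATES_PER_SESSION       # 224, the target figure
```

So 224 writes per second is the number any candidate session backend has to beat.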

NOTICE! These calculations are just preliminary; in short, they are “defective by design”. Any feedback is welcome.

Session handler approaches

Now that we have a number to work with (224 writes per second), we should remember that Lua is written in, and inherits the “limitations” of, standard C. Also, Ophal runs as CGI, so there is no single process handling all the load, which makes opening a connection to an RDBMS expensive. I’m tempted to say that we have only 3 viable approaches:

  • Write our very own filesystem-based session handler (with some clumsy rename algorithm)
  • Rely on SQLite3 and trust that our worst-case scenario is an easy job for it
  • Have a memory efficient NoSQL server take care of sessions

All three of them need to be measured, so the next alpha release of Ophal should have all of them implemented; time and statistics will give advice.

File uploads

Forget about our worst-case scenario: cloud computing and mobile are moving personal data to the Internet, massively! Our newspaper needs to be “huge files friendly”; everybody has the right to upload 1GB videos and their entire 8GB album of recent photos, in parallel, especially our reporters! Let’s bet on Node.js’s event-driven approach and make Ophal robust at this very critical feature of file uploads; getting anywhere close to that at the alpha stage is a good challenge :S
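Event-driven or not, the property that matters for huge files is that a 1GB upload never has to fit in RAM: read it in small chunks and stream each chunk straight to disk. A sketch of that loop (Python for illustration; the chunk size and names are my assumptions, not Ophal’s implementation):

```python
import hashlib
import io

CHUNK = 64 * 1024  # 64 KiB buffers keep memory flat even for 1GB videos

def stream_upload(source, sink):
    # Copy an upload chunk by chunk instead of buffering the whole file;
    # return (bytes_copied, sha1) so the app can verify the transfer.
    digest = hashlib.sha1()
    total = 0
    while chunk := source.read(CHUNK):
        sink.write(chunk)
        digest.update(chunk)
        total += len(chunk)
    return total, digest.hexdigest()

# Usage: stream a fake 1MB "video" through without ever holding it whole.
size, sha = stream_upload(io.BytesIO(b"x" * 1_000_000), io.BytesIO())
```

With blocking CGI each upload still ties up a process for its duration; that is where the event-driven model earns its keep, and where Ophal has homework to do.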


The past two weeks were really interesting. I would like to say thanks to the folks at ##c and #lua on Freenode’s IRC; there are really clever people there who quickly found Ophal’s architecture weaknesses, even more impressive considering that Ophal is a humble CGI script still taking its first baby steps.