$7K GLASS upgrade now free

Well, basically, if you don’t need commercial support.

At ESUG, the GemStone/S team announced that the free web edition of GemStone/S has had its limits raised. Maximum repository size is now 16GB (up from 4), shared page cache is 2GB (up from 1), CPU limit is 2 (up from 1), and the limit on the total number of objects in the repository has been eliminated. So if it was possible to fit ’em in 16GB, you could keep track of 2^40 objects (approx. 1 trillion).

You can upgrade an existing GLASS installation using the updated key files available here.

It boggles my mind that this sort of technology is available for free. Our projects are almost exclusively GLASS-based now, and all of them are running well within the limits of the new free license. Performance is great, customers are happy, and our developers love it. Kudos to GemStone for a great product!


Filed under Uncategorized

My Favorite GLASS Front-End Server: lighttpd

A while ago, Monty Williams at GemStone took an informal poll of folks on the GLASS mailing list to see what front-end web servers people were using. When the votes were tallied — at least among those who responded publicly — Apache won, followed closely by lighttpd. Cherokee, nginx, and Swazoo came in tied for third.

I didn’t participate at the time because I hadn’t made up my mind. We had test deployments running on both Apache and nginx, all using FastCGI to communicate with the backend gems.

Well, we had issues as our usage ramped up. We’re now generating PDFs dynamically from one of our GLASS apps, and the PDF creation takes a little longer than your average web request. Not only that, but an inefficient piece of code in our PDF generation made the process even longer.

The bug was easy enough to profile and fix, but it worked out to be a good test of our server setup. What happens when our backend servers get busy during a long-running process? In our case, the answer was “the whole site goes comatose until it’s done”. Not good.

I thought the answer to this was obvious: we needed more FastCGI handlers on the backend, so that if one of them was tied up with a long-running process, the others could keep serving requests. Apache doesn’t supply an easy way to distribute requests across FastCgiExternalServers, and although I know there are add-ons that can do this, they looked like serious overkill for what I’m trying to accomplish. I just want the requests farmed out to multiple dispatchers.

So I put nginx out front instead of Apache, since nginx has a round-robin load balancer that’s easy as pie to set up. It almost worked. Its round-robin logic is blind, though: it distributes requests without taking into account how busy any individual backend is.
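For reference, the nginx setup in question is just an `upstream` block of FastCGI backends — a minimal sketch, where the gem ports and the upstream name are made up for illustration:

```nginx
# nginx.conf fragment -- sketch; ports and the upstream name are hypothetical
http {
    upstream glass_gems {
        # blind round-robin: each request simply goes to the next gem in turn
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
        server 127.0.0.1:9003;
    }
    server {
        listen 80;
        location / {
            include fastcgi_params;
            fastcgi_pass glass_gems;
        }
    }
}
```

Dead simple to configure — which is exactly why the blindness of the rotation is easy to miss until a backend gets stuck.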

If this was a grocery store with 4 checkout lanes, it’s as if the store manager was standing in front of them, sending the first shopper to lane 1, the second to lane 2, etc., without ever looking behind him to see which lane had the shortest line. So if lane 3 is clogged up, you might just be unlucky enough to get put there — even though lanes 1, 2, and 4 are wide open.

I downloaded the source code for Cherokee, which is a really interesting-looking server. But its round-robin load balancer appears to work exactly the same way. Caveat: I haven’t actually tested it, just browsed the source for signs of hope and didn’t see any.

On the other hand, lighttpd maintains a running “load” for each FastCGI backend server. When an incoming request hits, lighttpd chooses from among the servers with the lightest load. This is exactly what I was looking for. Each shopper gets put in the shortest checkout line. Sure, if all lines are the same length when you arrive, you might still get put in the unlucky line, but… what’cha gonna do?
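The equivalent lighttpd setup lists the same backends under `fastcgi.server`; lighttpd’s mod_fastcgi then does the load-aware balancing described above on its own. A sketch, again with made-up ports:

```lighttpd
# lighttpd.conf fragment -- sketch; ports are hypothetical
server.modules += ( "mod_fastcgi" )

fastcgi.server = (
  "/" => (
    # three GLASS gems listening as FastCGI servers; lighttpd tracks a
    # running load per backend and dispatches to the least-loaded one
    ( "host" => "127.0.0.1", "port" => 9001, "check-local" => "disable" ),
    ( "host" => "127.0.0.1", "port" => 9002, "check-local" => "disable" ),
    ( "host" => "127.0.0.1", "port" => 9003, "check-local" => "disable" )
  )
)
```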

I understand that some benchmarks show Cherokee and nginx running slightly faster than lighttpd — but in a configuration like this, the frontend is such a small part of the overall load that I doubt a tiny speed advantage will make much difference. The queuing policies, however, make a noticeable difference. Under this configuration, our site is noticeably snappier, and I was able to send one of the FastCGI servers off into a tailspin without any of the site’s other visitors even noticing.



Updated: SIXX + XMLPullParser for GemStone

I got past the issues I was facing when porting my SIXX + XMLPullParser combo to GemStone, and now both packages are available on GemSource.

  • XMLPullParser-kdt.8 loads cleanly into GemStone as well as Squeak/Pharo, so there’s nothing complicated there.
  • The SIXX package is a hybrid of my work (the SIXX package on SqueakSource) and Dale’s GemStone port (as of about version dkh.152). Hopefully we can sync those up soon.
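For anyone who hasn’t tried SIXX: it serializes ordinary objects to XML and reads them back. A minimal sketch, assuming SIXX’s conventional `sixxString` / `readSixxFrom:` entry points — check the loaded package for the exact selectors in the GemStone port:

```smalltalk
| xml copy |
"Serialize a plain collection to a SIXX XML string."
xml := (OrderedCollection with: 1 with: 'two' with: 3.0) sixxString.

"Read it back; the result is a reconstructed copy of the original."
copy := Object readSixxFrom: (ReadStream on: xml).
```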

I hope somebody finds this stuff useful!

