Install CouchDB on Mac OS X (without using DarwinPorts)

The CouchDB installation instructions recommend using DarwinPorts for its dependencies. There is also a pre-built binary. If for one reason or another you don’t want to do either of those, here’s how to do it ‘by hand’.

install gettext

(assuming libxml2 is installed under a prefix like the one below)

  ./configure \
    --with-libxml2-prefix=/usr/local/libxml2-2.7.0
  make
  sudo make install

install Locale::gettext

  sudo perl -MCPAN -e 'shell'
  cpan> install Locale::gettext

install help2man

  
  curl http://svn.macports.org/repository/macports/trunk/dports/textproc/help2man/files/patch-Makefile.in \
      > patch-Makefile.in
  patch -p0 < patch-Makefile.in
  ./configure
  make
  sudo make install

install ICU4C 4.x

(download the source from http://site.icu-project.org/ and unpack it, then:)

  cd source
  chmod +x runConfigureICU configure install-sh
  ./runConfigureICU MacOSX
  make
  sudo make install

install spidermonkey

(based on the Linux instructions; these steps are obsolete for SpiderMonkey 1.8.x and beyond)

  cd src
  make BUILD_OPT=1 -f Makefile.ref
  sudo mkdir -p /usr/include/smjs/
  sudo cp *.{h,tbl} /usr/include/smjs/
  cd Darwin_OPT.OBJ/
  sudo cp *.h /usr/include/smjs/
  sudo mkdir -p /usr/local/{bin,lib}/
  sudo cp js /usr/local/bin/
  sudo cp libjs.dylib /usr/local/lib/

install erlang

  ./configure
  make
  sudo make install

install latest CouchDB

(should also work with a downloaded source distribution)

  svn co http://svn.apache.org/repos/asf/couchdb/trunk
  cd trunk
  ./bootstrap
  ./configure \
    --with-js-include=/usr/include/smjs \
    --with-js-lib=/usr/local/lib
  make
  sudo make install

Run CouchDB

  couchdb &
  open http://127.0.0.1:5984/_utils/browse/index.html
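
To check from a script instead of the browser, a quick Python sanity check (assuming CouchDB is listening on the default 127.0.0.1:5984):

  # prints CouchDB's welcome document if the server is up
  import urllib2
  print urllib2.urlopen('http://127.0.0.1:5984/').read()
  # -> {"couchdb":"Welcome","version":"..."}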

History/audit tables using MySQL triggers

If you are using MySQL >= 5.1 with row-based replication (so you avoid this), then one possible pattern for keeping track of database changes (for history management, auditing, etc.) is this MySQL script.

Please note that you still need to do a ton of stuff before your database is really auditable.

A short url-safe identifier scheme

Let’s say you’re building a central-database system that you may want to make into a distributed system later. Then you don’t want to tie yourself to serial numeric identifiers (like, for example, the ones that Ruby on Rails is full of).

What do distributed platforms do?

They leave the id-generation problem to the user (though they will provide defaults based on some very-unique number). IDs are strings (UTF-8 or ASCII-safe), and can be quite long:

250 characters seems like a pretty large upper limit.

128 random bits should be unique enough for anybody

UUIDs are 128 bits and are conventionally written as 36 characters (32 hex digits plus 4 dashes). The possibility of an identifier collision is really, really tiny (random UUIDs have 122 random bits).
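
As a quick illustration of those numbers, Python’s uuid module can generate one (just a sketch, nothing specific to this scheme):

import uuid

u = uuid.uuid4()        # a random (version 4) UUID: 122 random bits
print u                 # e.g. 68ff9b72-7b6a-4ea4-b35f-77ff50f938fb
print len(str(u))       # 36 characters including the dashes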

Unfortunately, UUIDs are ugly:

http://example.com/68ff9b72-7b6a-4ea4-b35f-77ff50f938fb

It is just not a nice url. It would be nice if we could take 128-bit numbers and encode them as base64, or maybe base62 or url-safe-base64, or maybe even as base36 for increased compatibility. A 128-bit number is 22 characters in base64, 25 characters in base36. You end up with:

http://example.com/f5lxx1zz5pnok6cyejdnd7ri9
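
To double-check those character counts, a small Python 2 sketch (the base36 helper here is a throwaway for illustration; the post’s own implementation appears further down):

import base64

def to_base36(n):
    # repeatedly divide by 36, collecting digits most-significant first
    alphabet = '0123456789abcdefghijklmnopqrstuvwxyz'
    s = ''
    while n > 0:
        n, r = divmod(n, 36)
        s = alphabet[r] + s
    return s or '0'

n = 2 ** 128 - 1                          # largest 128-bit number
print len(to_base36(n))                   # 25
print len(base64.urlsafe_b64encode(('%032x' % n).decode('hex')).rstrip('='))   # 22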

What about 64 bits?

If we went with 64-bit numbers, we’d sacrifice quite a bit of collision-prevention, but maybe not so much that it is scary on a per-application basis.

What is also interesting is that lots of software supports operations on 64-bit numbers a lot better than on 128-bit numbers. We would end up with 13 characters in base36 (11 in base64). In base36 that looks like this:

http://app.example.com/3w5e11264sgsf

That seems kind-of good enough, for now. Letting inserts into the database fail on an identifier collision seems like a reasonable way to handle duplicates, especially if our application is fairly strict REST (so a failed PUT can be retried pretty safely).

Moving to a distributed system safely is possible if we have some reasonable identifier versioning scheme (13 characters = version 0, 14 characters = scheme version 1-10, more characters = TBD). Then in our app we match our identifiers using ^[0-9a-z][0-9a-z-]{11,34}[0-9a-z]$ (a regex which will also match 36-character UUIDs).
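
A quick sanity check of that pattern in Python (just demonstrating the regex above):

import re

ID_RE = re.compile(r'^[0-9a-z][0-9a-z-]{11,34}[0-9a-z]$')

print bool(ID_RE.match('3w5e11264sgsf'))                          # 13-character base36 id
print bool(ID_RE.match('68ff9b72-7b6a-4ea4-b35f-77ff50f938fb'))   # 36-character UUID
print bool(ID_RE.match('too_short'))                              # too short, underscore: no match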

Some ruby

def encode_id(n)
  return n.to_s(36).rjust(13,'0')
end

def decode_id(s)
  return s.to_i(36)
end

def gen_id()
  # 18446744073709551615 == 2**64 - 1, so this yields a random 64-bit number
  return encode_id( rand( 18446744073709551615 ) )
end

Some MySQL

Besides the MySQL equivalents of the functions above, here are some ideas on how to maintain consistency of ids across data types (tables).

CREATE FUNCTION encode_id (n BIGINT) RETURNS char(13) NO SQL
  RETURN LPAD( LOWER(CONV(n,10,36)), 13, '0');

CREATE FUNCTION decode_id (n char(13)) RETURNS BIGINT NO SQL
  RETURN CONV(n,36,10);

CREATE FUNCTION gen_num_id () RETURNS BIGINT NO SQL
  RETURN FLOOR(RAND() * 184467440737095516);

CREATE FUNCTION gen_id () RETURNS char(13) NO SQL
  RETURN encode_id( gen_num_id() );

CREATE TABLE ids (
  -- this table should not be updated directly by apps,
  --   though they are expected to read from it
  numid BIGINT unsigned NOT NULL PRIMARY KEY,
  id char(13) NOT NULL UNIQUE,
  prettyid varchar(64) DEFAULT NULL UNIQUE
) ENGINE=InnoDB;

CREATE TABLE mythings (
  numid BIGINT unsigned NOT NULL PRIMARY KEY,
  id char(13) NOT NULL UNIQUE,
  prettyid varchar(64) DEFAULT NULL UNIQUE,
  something varchar(255) DEFAULT NULL
) ENGINE=InnoDB;

CREATE TABLE mythings2ids (
  -- this table should not be updated directly by apps,
  --   though its ok if they read from it
  numid BIGINT unsigned NOT NULL PRIMARY KEY,
  CONSTRAINT FOREIGN KEY (numid)
    REFERENCES ids (numid)
    ON DELETE cascade
    ON UPDATE cascade,
  CONSTRAINT FOREIGN KEY (numid)
    REFERENCES mythings (numid)
    ON DELETE cascade
    ON UPDATE cascade
) ENGINE=InnoDB;

DELIMITER |
CREATE TRIGGER mythings_before_insert BEFORE INSERT ON mythings
  FOR EACH ROW BEGIN
    INSERT INTO ids (numid,id,prettyid) VALUES (NEW.numid, NEW.id, NEW.prettyid);
  END
|
CREATE TRIGGER mythings_after_insert AFTER INSERT ON mythings
  FOR EACH ROW BEGIN
   INSERT INTO mythings2ids (numid) VALUES (NEW.numid);
  END
|
CREATE TRIGGER mythings_before_update BEFORE UPDATE ON mythings
  FOR EACH ROW BEGIN
    IF NEW.numid != OLD.numid THEN
      CALL CANNOT_CHANGE_NUMID_AFTER_CREATION;
    END IF;
    IF NEW.id != OLD.id THEN
      CALL CANNOT_CHANGE_ID_AFTER_CREATION;
    END IF;
    IF NEW.prettyid != OLD.prettyid THEN
      IF OLD.prettyid IS NOT NULL THEN
        CALL CANNOT_CHANGE_PRETTYID_AFTER_INIT;
      ELSE
        UPDATE ids SET prettyid = NEW.prettyid
          WHERE numid = NEW.numid LIMIT 1;
      END IF;
    END IF;
  END
|
CREATE TRIGGER mythings_after_delete AFTER DELETE ON mythings
  FOR EACH ROW BEGIN
   DELETE FROM ids WHERE numid = OLD.numid LIMIT 1;
  END
|
DELIMITER ;

-- SELECT gen_id() INTO @nextid;
-- INSERT INTO mythings (numid,id,prettyid,something)
--   VALUES (decode_id(@nextid),@nextid,
--       '2009/03/22/safe-id-names2','blah blah blah');
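
From application code, the retry-on-failed-insert idea mentioned earlier might look roughly like the following. This is a hedged sketch, not part of the original scheme: the insert_mything helper is hypothetical, and it assumes the MySQLdb driver plus the encode_id/decode_id/gen_id helpers from the Python section below.

import MySQLdb

def insert_mything(conn, prettyid, something, attempts=3):
    # relies on the PRIMARY KEY / UNIQUE constraints above to reject the
    # (extremely unlikely) id collision, and simply retries with a fresh id
    for _ in range(attempts):
        newid = gen_id()                          # gen_id/decode_id: see "Some python" below
        try:
            cur = conn.cursor()
            cur.execute(
                "INSERT INTO mythings (numid, id, prettyid, something) "
                "VALUES (%s, %s, %s, %s)",
                (decode_id(newid), newid, prettyid, something))
            conn.commit()
            return newid
        except MySQLdb.IntegrityError:
            conn.rollback()                       # note: a duplicate prettyid also ends up here
    raise RuntimeError('could not insert a unique id')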

Some python

Python lacks built-in base36 encoding. The code below is based on this sample, which is nicer than my own attempts that used recursion…

import string
import random

__ALPHABET = string.digits + string.ascii_lowercase
__ALPHABET_REVERSE = dict((c, i) for (i, c) in enumerate(__ALPHABET))
__BASE = len(__ALPHABET)
__MAX = 18446744073709551615L
__MAXLEN = 13

def encode_id(n):
    s = []
    while True:
        n, r = divmod(n, __BASE)
        s.append(__ALPHABET[r])
        if n == 0: break
    while len(s) < __MAXLEN:
        s.append('0')
    return ''.join(reversed(s))

def decode_id(s):
    n = 0
    for c in s.lstrip('0'):
        n = n * __BASE + __ALPHABET_REVERSE[c]
    return n

def gen_id():
    return encode_id(random.randint(0, __MAX))
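
A quick round-trip check of these helpers (the largest 64-bit number encodes to the same 3w5e11264sgsf value used in the example URL above):

i = gen_id()
print i                              # e.g. '1n2mki74bnzcz'
assert encode_id(decode_id(i)) == i  # round-trips, including the zero padding
print encode_id(__MAX)               # '3w5e11264sgsf' == 2**64 - 1
print decode_id('00000000000zz')     # 1295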

Diving into ruby on rails part 2

The coffee helped!

I’ve followed through the excellent getting started guide with no problems (though my demo site is about britney spears videos, not a blog), nipping out every now and then to check out reference documentation and rails source code (which is pretty tough to follow for now).

I’ve also installed modrails, which worked as advertised (gotcha: they forget to mention that you should also set up access permissions for your $railsapproot/public).

modrails performance, no database

With the same apache config I settled on for mod_wsgi, out of the box, performance is reasonable:

$ ab -k -n 10000 -c 100 http://127.0.0.1:81/
Requests per second:    508.93 [#/sec] (mean)
Time per request:       196.490 [ms] (mean)
Time per request:       1.965 [ms]
          (mean, across all concurrent requests)

With some very basic tuning:

PassengerHighPerformance on
RailsSpawnMethod smart
PassengerMaxPoolSize 30
PassengerPoolIdleTime 0
PassengerStatThrottleRate 300

I don’t see that much difference:

Requests per second:    533.85 [#/sec] (mean)
Time per request:       187.319 [ms] (mean)
Time per request:       1.873 [ms]
           (mean, across all concurrent requests)

The ruby processes take about 4-5% CPU per process, the httpd ones take about 0.6% per process. So while the overhead of Ruby on Rails is pretty significant, it’s really not shocking considering how much work it is doing.

The built-in mongrel server in development mode does about 40 req/s, so you really don’t want to use that as a guide for performance benchmarking.

modrails performance, with database

Using the sqlite3 database backend with a very simple page:

$ ab -k -n 10000 -c 100 http://127.0.0.1:81/videos/1/comments/3
Requests per second:    256.87 [#/sec] (mean)
Time per request:       389.302 [ms] (mean)
Time per request:       3.893 [ms] (mean, across all concurrent requests)

Let’s try mysql…

production:
  adapter: mysql
  encoding: utf8
  database: britneyweb
  pool: 30
  username: root
  password:
  socket: /tmp/mysql.sock
$ sudo gem install mysql -- \
  --with-mysql-config=/usr/local/mysql/bin/mysql_config
$ RAILS_ENV=production rake db:migrate
$ sudo apachectl restart
# create some sample data in production db...
$ mysql -u root britneyweb
mysql> analyze table comments;
mysql> analyze table tags;
mysql> analyze table videos;
mysql> show create table comments \G
*************************** 1. row ***************************
       Table: comments
Create Table: CREATE TABLE `comments` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commenterName` varchar(255) DEFAULT NULL,
  `commenterUrl` varchar(255) DEFAULT NULL,
  `commenterEmail` varchar(255) DEFAULT NULL,
  `body` text,
  `video_id` int(11) DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
$ 

Hmm, now the machine is swapping. Tuning down a bit, and then:

Requests per second:    250.00 [#/sec] (mean)
Time per request:       400.004 [ms] (mean)
Time per request:       4.000 [ms]
        (mean, across all concurrent requests)

I can’t really get it to go faster. It seems we are pretty much CPU-bound, with the vast majority of CPU going to ruby processes.

Adding page caching

Adding this tiny bit of code to the comments controller:

class CommentsController < ApplicationController
  ...
  caches_page :show
  ...

Helps a ‘bit’:

Requests per second:    4398.80 [#/sec] (mean)
Time per request:       22.733 [ms] (mean)
Time per request:       0.227 [ms]
        (mean, across all concurrent requests)

Now I need a cache sweeper:

class CommentsController < ApplicationController
  ...
  caches_page :show
  cache_sweeper :comment_sweeper, :only => [:create, :update, :edit, :destroy]

Enabling memcached cache backend in config/production.rb:

config.cache_store = :mem_cache_store, 'localhost',  \
   '192.168.1.1:11211', { :namespace => 'bwprod' }

…is not much faster (so the bottleneck is probably elsewhere):

Requests per second:    4461.97 [#/sec] (mean)
Time per request:       22.412 [ms] (mean)
Time per request:       0.224 [ms]
      (mean, across all concurrent requests)

Lessons learned:

  • raw apache + modrails + rails can do a reasonable 500 req/s on my laptop when not connecting to a database, and a reasonable 400 req/s when connecting to sqlite3 for a simple page.
  • I shouldn’t really attempt to performance-tune modrails.
  • ActiveRecord seems good at chewing up CPU.
  • rails page caching makes things go fast, above 4000 req/s on my laptop.
  • rails + memcached is trivial to set up.

Diving (back) into ruby on rails

First lesson learned: if you’re like me and you bought a new mac recently that came with OS X 10.5, and you have installed the developer tools (from your install DVDs), then here are the more-likely-to-work instructions for installing Ruby on Rails on Mac OS X 10.5:

$ sudo gem update --system
$ sudo gem install activeresource
$ sudo gem update
$ rails path/to/your/new/application
$ cd path/to/your/new/application
$ ruby script/server

How the lesson was learned

I abandoned Ruby on Rails somewhere in its 1.x days because it was unstable, evolving too fast, and because the deployment architecture (FastCGI) was stupid. The latter seems to have gotten much better, and apparently my macbook even comes with rails installed. Rails is on version 2.3, which “sounds” pretty mature. So let’s dive in!

I go to this page and click the red “get started” arrow. The download page scares me a bit with comments about upgrading to Ruby 1.9, but I figure I should be able to do this:

sudo gem update rails
mkdir foo
rails foo

But I get this:

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rubygems.rb:379:
in `report_activate_error': RubyGem version error: rake(0.7.3 not >= 0.8.3) (Gem::LoadError)
	from .../rubygems.rb:311:in `activate'
	from .../rubygems.rb:337:in `activate'
	from .../rubygems.rb:336:in `each'
	from .../rubygems.rb:336:in `activate'
	from .../rubygems.rb:65:in `active_gem_with_options'
	from .../rubygems.rb:50:in `gem'
	from /usr/bin/rails:18

Ploughing on looked like it would be depressing:

$ sudo gem update rake
$ rails foo
$ ... RubyGem version error: activesupport(1.4.4 not = 2.3.2) ...

Grr. Time for a bit of RTFM.

$ gem help
$ gem help commands
$ sudo gem update

It seems I’m now installing various not-yet-1.0 tools with impressive names such as acts_as_ferret and hpricot. I also got a bunch more errors:

ERROR:  Error installing RedCloth:
	RedCloth requires RubyGems version >= 1.2
ERROR:  While generating documentation for actionpack-2.3.2
... MESSAGE:   Unhandled special: Special: type=33, text="TODO"
ERROR:  While generating documentation for activerecord-2.3.2
... MESSAGE:   Unhandled special: Special: type=33, text="Fixtures"
ERROR:  While generating documentation for activesupport-2.3.2
... MESSAGE:   Unhandled special: Special: type=33, text="TODO"
ERROR:  While generating documentation for acts_as_ferret-0.4.3
... MESSAGE:   Unhandled special: Special: type=33, text="Ferret"
ERROR:  Error installing capistrano:
	capistrano requires RubyGems version >= 1.2
ERROR:  While generating documentation for daemons-1.0.10
... MESSAGE:   Unhandled special: Special: type=33, text="All"
ERROR:  Error installing fastthread:
	fastthread requires RubyGems version >= 1.2
ERROR:  While generating documentation for ferret-0.11.6
... MESSAGE:   Unhandled special: Special: type=33, text="Provides"
ERROR:  While generating documentation for highline-1.5.0
... MESSAGE:   Unhandled special: Special: type=33, text="A"
ERROR:  While generating documentation for hpricot-0.7
... MESSAGE:   Unhandled special: Special: type=33, text="Detect"
ERROR:  While generating documentation for libxml-ruby-1.1.2
... MESSAGE:   Unhandled special: Special: type=33, text="The"
ERROR:  While generating documentation for mongrel-1.1.5
... MESSAGE:   Unhandled special: Special: type=33, text="Alias"
ERROR:  Error installing net-sftp:
	net-sftp requires RubyGems version >= 1.2
ERROR:  Error installing net-ssh:
	net-ssh requires RubyGems version >= 1.2
ERROR:  While generating documentation for ruby-openid-2.1.4
... MESSAGE:   Unhandled special: Special: type=33, text="A"
ERROR:  While generating documentation for sqlite3-ruby-1.2.4
... MESSAGE:   Unhandled special: Special: type=33, text="This"

Even less impressive:

Gems updated: RedCloth, actionmailer, actionpack, activerecord,
activesupport, acts_as_ferret, capistrano, daemons, dnssd,
fastthread, ferret, highline, hpricot, libxml-ruby, mongrel,
net-sftp, net-ssh, ruby-openid, rubynode, sqlite3-ruby

It says it failed to install RedCloth, but then later on it says it updated it anyway! I wonder if I just b0rked my out-of-the-box ruby setup… Pressing on, according to the RubyGems docs I need to run

$ sudo gem update --system
...
RubyGems installed the following executables:
	/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/gem

If `gem` was installed by a previous RubyGems installation, you may need
to remove it by hand.
$ ls -l `which gem`
/usr/bin/gem ->
   ../../System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/gem

Will it be happier, now?

$ sudo gem update
Updating installed gems
...
Gems updated: RedCloth, net-ssh, net-sftp, net-scp, net-ssh-gateway,
capistrano, fastthread

That looks ok. Looks like I’ll be lacking some documentation for some of my gems, but I can live with that. Try again:

$ rails foo
/Library/Ruby/Site/1.8/rubygems.rb:636:in `report_activate_error':
Could not find RubyGem activeresource (= 2.3.2) (Gem::LoadError)
	from /Library/Ruby/Site/1.8/rubygems.rb:141:in `activate'
	from /Library/Ruby/Site/1.8/rubygems.rb:165:in `activate'
	from /Library/Ruby/Site/1.8/rubygems.rb:164:in `each'
	from /Library/Ruby/Site/1.8/rubygems.rb:164:in `activate'
	from /Library/Ruby/Site/1.8/rubygems.rb:49:in `gem'
	from /usr/bin/rails:18
$ gem list | grep active
activerecord (2.3.2, 1.15.6)
activesupport (2.3.2, 1.4.4)

Not impressed. Time to go find a list of dependencies. Oh, it’s the last one remaining. Duh.

$ sudo gem install activeresource
$ rails foo
      ...
      create  log/test.log
$ cd foo
$ ruby script/server
$ curl http://localhost:3000/

Yay! Success. Time taken to get Rails 2.3.2 installed on OS X: ~40 minutes. Current mood: feel like an idiot. Feel like ranting at the ruby/rails community about how to do dependency management that doesn’t suck, and about testing your installation instructions. Time for coffee.

Getting a feel for the cost of using mysql with WSGI

So is the database the bottleneck?

Note that compared to the last post, I enabled KeepAlive, i.e. I added the -k flag to ab:

ab -k -n 10000 -c 100 http://localhost/hello.txt

I also toned down the MPM limits a bunch:

StartServers         16
MinSpareServers      16
MaxSpareServers     128
MaxClients           64
MaxRequestsPerChild   0

(Apache will now complain about reaching MaxClients, but given that the db is on the same machine, things end up faster overall this way.)

And then I decided to just run ‘ab’ locally since I couldn’t really see that much difference vs running it over the network.

Then I proceeded to get myself a MySQL 5 server with a single InnoDB table inside with a single row in it that reads (key:1, value:'Hello world!'). I also set up a memcached server. Now, look at this…

What?               Requests per second   Relative performance hit   CPU usage
hello.txt           9700                                             httpd 2.2% per process
python string       7500                  23%                        httpd 2% per process
memcached lookup    2100                  23% * 72%                  memcached 10%, httpd 2% per process
_mysql              1400                  23% * 82%                  MySQL 77%, httpd 1.2% per process
mysql               880                   23% * 82% * 37%            MySQL 65%, httpd 4.5% per process
SQLAlchemy          700                   23% * 82% * 37% * 20%      MySQL 19%, httpd 5.5% per process

Using a MySQL database backend costs 82% of the req/s compared to the “serve python string from a global variable” case. Totally expected of course, and we have also been taught how to deal with it (caching).

In this example, we’re not actually getting a lot out of using memcached – the memcached approach (I’m using python-memcached) is still 72% slower than having the in-process memory, though if you look at it the other way around, it is 50% faster than using the _mysql driver. Nevertheless, this shows that the mysql query cache is pretty fast, too, esp. if you have a 100% cache hit ratio 🙂

Using the SQLAlchemy ORM framework makes things 20% slower, and it gobbles up a lot of CPU. We also expected that, and we know how to deal with it (hand-optimize the SQL in performance-sensitive areas, or perhaps use a faster ORM, like storm).

But did you know that you can take a 37% performance hit just by using the MySQLdb object-oriented wrappers instead of the raw _mysql bindings? Looks like some optimizations should be possible there…

Now, in “real life”, imagine a 90% hit ratio on the memcache, and for the other 10% there’s some expensive ORM processing. Further imagine a 5% performance hit in the logic to decide between memcache and ORM layer. You’d end up doing about 1850 requests per second.
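
In sketch form, that memcache-in-front-of-MySQL arrangement might look like this. This is a hedged sketch, not the code used for the benchmarks: it assumes python-memcached and MySQLdb, and the kv table name/columns and lookup helper are hypothetical stand-ins for the single-row table described above.

import memcache
import MySQLdb

mc = memcache.Client(['127.0.0.1:11211'])
db = MySQLdb.connect(user='root', db='test', unix_socket='/tmp/mysql.sock')

def lookup(key):
    value = mc.get('kv:%d' % key)
    if value is None:                         # cache miss: hit MySQL, then cache
        cur = db.cursor()
        cur.execute("SELECT value FROM kv WHERE `key` = %s", (key,))
        row = cur.fetchone()
        value = row[0] if row else ''
        mc.set('kv:%d' % key, value, 300)     # cache for five minutes
    return value

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [lookup(1)]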

This leads to Leo’s WSGI performance rule of thumb number 1: Even if your python webapp is really really really fast, it will still be about 5 times slower than apache serving up static files.
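
Back-of-the-envelope, the same estimate using the numbers from the table above:

blended = 0.95 * (0.9 * 2100 + 0.1 * 700)   # 90% memcache hits, 10% ORM, 5% dispatch overhead
print int(blended)                          # ~1862 requests per second
print 9700 / blended                        # ~5.2x slower than static hello.txt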

Getting a feel for the performance of mod_wsgi

Disclaimer: there’s a lot that’s wrong with my testing setup and/or methodology. Given the kind of concurrency I’m playing around with, I run into a whole bunch of operating system and TCP/IP limits. Though I’ve tweaked those a little bit (for example # of open file descriptors, TCP window size), I haven’t been particularly scientific about it. Though I did various warm-ups and system reboots, I wasn’t particularly good at keeping consistent timing or consistently checking that there were no lingering connections in TIME_WAIT or whatever.

System under test:

  • Server: Latest-gen macbook 2.4ghz intel core 2 duo
  • Test client: Mac Pro tower dual core G5
  • Switch: some crappy 1gbit whitelabel
  • Stock Apache on Mac OS X 10.5 (prefork MPM, 64bit)
  • Stock python 2.5 on Mac OS X 10.5
  • mod_wsgi 2.3
  • ApacheBench, Version 2.3, commands like ab -n 5000 -c 100 http://10.0.0.2/wsgitest

Out of the box:

  • about 2000-3000 req/s for static file
  • about 2000-2800 req/s for mod_wsgi in-process
  • about 1500-2500 req/s when using mod_wsgi in daemon mode (with errors beyond concurrency of about 250, for various settings of p, t)
  • concurrency=1000 makes ApacheBench start reporting lots of failures

Some MPM tuning follows, arriving at:

StartServers        100
MinSpareServers      10
MaxSpareServers     500
MaxClients          500
MaxRequestsPerChild   0

Results then become better (especially more stable):

  • about 5000 req/s for static file
  • With EnableMMAP off and EnableSendfile off, even concurrency=10 is already a problem for the static file scenario, and req/s doesn’t go above 3000 for concurrency >= 5
  • about 4300 req/s for mod_wsgi in process
  • about 2700 req/s for mod_wsgi in daemon mode
  • concurrency=1000 still makes ApacheBench start reporting lots of failures

Some more data:

                    hello.txt           wsgitest        wsgitest    wsgitest    wsgitest
                                        in-process      p=2,t=10    p=2,t=100   p=20,t=10
concurrency 10      req/s:  4784
                    ms/req:    0.21

concurrency 100     req/s:  5081        4394            3026        3154        2827
                    ms/req:    0.20        0.23            0.33        0.32        0.35

concurrency 200     req/s:  5307        4449            2688        2988        2711
                    ms/req:    0.19        0.24            0.37        0.34        0.36

concurrency 500     req/s:  4885        4137            2779        3019        2738
                    ms/req:    0.21        0.24            0.36        0.33        0.36

hello.txt is a 13 byte file containing "Hello World!\n"
wsgitest is a really simple wsgi script spitting out "Hello World!\n"
concurrency is the argument to ab -c
p is number of processes for mod_wsgi daemon mode
t is number of threads for mod_wsgi daemon mode
ms/req is the mean time per request across all concurrent requests as reported by ab
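
The wsgitest script itself was not included in the post; a minimal mod_wsgi script of that kind would look something like the following (my guess at the original, relying on the fact that mod_wsgi looks for a callable named application in the .wsgi file):

def application(environ, start_response):
    body = 'Hello World!\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]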

Tentative conclusions:

  • With my hardware I have little chance of actually finding the limits of apache or mod_wsgi unless I spend a whole lot of time much more carefully testing what I’m actually measuring
  • running mod_wsgi in-process is probably a good idea if you tweak the MPM for it
  • mod_wsgi on my laptop can probably easily serve over a billion requests/month after a little tuning
  • mod_wsgi on my laptop can deal with 500 “concurrent” users without errors

…so, in other words, mod_wsgi is most likely “fast enough” and most likely will not be a bottleneck if I build something with it. Not exactly surprising. But, more importantly, now I have some baseline numbers for req/s performance on my system, so that I can run some “performance smoke tests” against software I write.