Our first production Solaris x86 server is on line, and it's not a fileserver as we'd originally planned.
"mr4.umbc.edu", a 2x Xeon Dell 2650 with 3G of RAM was jumpstarted to Solaris 10 yesterday afternoon, and is currently serving imap/pop service to over 400 users as part of the imap/pop service cluster. The other machines in the cluster (mr5 - 8) with similar hardware are currently running Linux.
So, if you're reading mail right now, there's a 1/5 chance you're using it...
I'm not quite happy with its configuration, however. The OpenAFS client under Solaris 10 is still having some problems dealing with a disk cache, so it's running with a 400M memory cache. Why so small? Because the 32-bit Solaris kernel can allocate, at most, 512M of kernel memory. The 64-bit Solaris kernel doesn't have that restriction (the amount is tunable).
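For anyone checking their own box: you can see which kernel Solaris booted with isainfo, and the memory cache is selected with afsd's -memcache flag. A rough sketch; the flag values below are illustrative, not mr4's exact settings:

```shell
# Is the running Solaris kernel 32-bit or 64-bit?
isainfo -kv

# Start the AFS cache manager with a ~400M memory cache
# (-blocks is in 1K units, so 400000 blocks ~= 400M):
/usr/vice/etc/afsd -memcache -blocks 400000
```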
However, even with the reduced cache, it's performing quite well. I'm not complaining.
It lasted about an hour: after throwing some more load at it, it froze up, very similarly to how I saw it freeze when trying to use a "just too large" AFS memory cache. So, the kernel probably ran out of memory and decided to go poof.
So, I looked back into the problem of getting the disk cache working under OpenAFS. Some similar problems I had been seeing were also mentioned in this openafs-devel thread: https://lists.openafs.org/pipermail/openafs-devel/2004-November/011177.html. As the 2650 is a 32-bit machine, it's possible that I was running into the same problems. I patched OpenAFS with this:
*** param.sunx86_510.h.orig Fri Apr 15 15:21:29 2005
--- param.sunx86_510.h Fri Apr 15 14:19:01 2005
*** 34,40 ****
--- 34,42 ----
  #define AFS_X86_ENV 1
  #define AFS_64BIT_ENV 1 /* Defines afs_int32 as int, not long. */
+ #if defined(__x86_64)
  #define AFS_64BIT_CLIENT 1
+ #endif
  #define AFS_HAVE_FLOCK_SYSID 1
and tested it by, what else, doing a build of OpenAFS using a disk cache... And, no problems... So, mr4 is back in production now...
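If you want to try the same setup: the disk cache is configured through the cacheinfo file rather than an afsd flag. A minimal sketch using the stock OpenAFS paths and an illustrative cache size (not mr4's actual numbers):

```shell
# /usr/vice/etc/cacheinfo format is
#   mount point:cache directory:cache size in 1K blocks
# e.g. a ~400M disk cache in the default location:
echo "/afs:/usr/vice/cache:400000" > /usr/vice/etc/cacheinfo

# Start the cache manager in its default disk-cache mode
# (i.e. without -memcache); it reads cacheinfo at startup:
/usr/vice/etc/afsd
```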