You might want to look into things like gzip compression and maybe a site 
redesign with CSS.
Both can bring your bandwidth down a lot, which in turn should give the site 
a better TCO.
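
Just to get a feel for the savings, you can compress one of their pages with 
nothing but the Ruby standard library (the file name here is made up, and 
deflate is close enough, size-wise, to what mod_gzip/mod_deflate would send):

  require 'zlib'

  html = File.read('index.html')      # any page saved from the site
  gzipped = Zlib::Deflate.deflate(html)
  printf("%d -> %d bytes (%.0f%% of original)\n",
         html.size, gzipped.size, 100.0 * gzipped.size / html.size)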

If you go with the static site idea you can also get a low-end server with 
mysql / rails / php / whatever to generate the pages.
The main server would then only have to serve the static pages.
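
A rough sketch of that publish step, assuming ERB templates, a made-up 
docroot path, and a hypothetical template file (in the real thing the rows 
would come out of MySQL):

  #!/usr/bin/env ruby
  require 'erb'

  DOCROOT  = '/var/www/html'                     # assumed Apache docroot
  TEMPLATE = ERB.new(File.read('article.rhtml')) # hypothetical template

  # Stand-in data; the real script would pull these rows from MySQL.
  articles = [
    { :slug => 'welcome', :title => 'Welcome', :body => '<p>Hello</p>' }
  ]

  articles.each do |article|
    html = TEMPLATE.result(binding)     # template can reference `article`
    File.open("#{DOCROOT}/#{article[:slug]}.html", 'w') { |f| f.write(html) }
  end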



"SEan Wolfe" <nospam / nowhere.com> wrote in message 
news:11j95pb80n02rcd / news.supernews.com...
> snacktime wrote:
>> On 9/23/05, SEan Wolfe <nospam / nowhere.com> wrote:
>> Do you mean simultaneous users or actually completed requests per second?
>> 200 req/sec is over 8 million requests in a 12 hour period. 200
>> simultaneous connections is another story, and shouldn't be a real issue
>> for apache/lighttpd and rails.
>
> Well, basically the site currently serves about 160,000 visits/day, 
> 8,000,000 pages/day, and 18,000,000 hits/day, and pushes out about 
> 135GB/day. All of this is served by apache and static HTML with a lot of 
> includes.
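
Back-of-envelope, 18,000,000 hits over the 86,400 seconds in a day averages 
out to a bit over 200 hits/sec, and 135GB/day works out to roughly 12.5Mbit/s 
sustained (ignoring peaks), so the 200 req/sec figure above is about right as 
a daily average.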
>
> Their box is a 4-CPU Linux box with 1.5GB of RAM. I took some activity 
> samples, and it seems that the load on the box averages about 35-40%. 
> MySQL is pretty much idle, since currently it's barely used.
>
> They update about every hour, but if they had some sort of CMS, they would 
> like to update more frequently and be even more dynamic.
>
> The idea of keeping things on static pages seems fine. I just don't know 
> how responsive it would be to frequent updates. Each update changes the 
> navigation in several locations (always keeping the freshest content 
> within a click, from any location on the site).
>
>> The main issues you will have are setting up load balancing and failover.
>> We use ServerIrons a lot as front end load balancers. Personally I love
>> them as they take a lot less time to manage than most open source
>> solutions and have a lower failure rate. Distributing access to your
>> databases will probably be one of the main challenges, as well as having
>> some type of failover mechanism. For a good general caching system you
>> might take a look at memcached. We use it a lot and it's a great tool for
>> taking the load off of your database servers.
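
For what it's worth, the cache-aside pattern described here is only a few 
lines from Ruby. A rough sketch, assuming the memcache-client gem and a 
memcached daemon on localhost (the key name and render method are made up):

  require 'rubygems'
  require 'memcache'

  CACHE = MemCache.new('localhost:11211')

  def render_front_page
    "<html>...expensive page build and database hits go here...</html>"
  end

  def cached(key, ttl = 300)
    html = CACHE.get(key)
    return html if html
    html = yield                  # only pay the rendering cost on a miss
    CACHE.set(key, html, ttl)
    html
  end

  page = cached('front_page') { render_front_page }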
>
> Currently, their emphasis is on cost to implement. Their hosting service 
> provides them with one dedicated box and a block of bulk bandwidth. The 
> provider thinks that a CMS would require a second box to split the load, 
> which would add to their monthly costs. So any dedicated load balancer is 
> up to the hosting company, and the cost to implement. It's hard to believe 
> that such a large site has such a small budget! :P
>
> Because of cost, they were also looking into off-the-shelf open source 
> solutions. But most of the ones that I've seen didn't seem to fit well, or 
> were just a big mess of PHP.
>
> There is also the issue of my own costs, since I don't want to spend a 
> great deal of time developing the site. Ruby is a great choice in that 
> respect.
>
>> In our case everything is dynamic so we don't have a use for squid,
>> although you could use that in conjunction with a ServerIron, or in place
>> of one if you just want to throw up several linux boxes with squid and
>> use round robin dns. Personally I don't care much for messing with dns
>> techniques unless you need to geographically distribute your servers.
>
> The only DNS trick I was thinking of possibly doing is having two sites: 
> one for the CMS application where the editors and writers add their 
> content, and one site that the visitors see. The CMS site would then 
> publish content to the live site. The CMS site could be heavier on the 
> app side and not need any serious hardware specs, since it would only 
> have to serve at most about 50 users a day.
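
That split sounds sensible. One cheap way to do the publish step is to 
regenerate the pages on the CMS box and then rsync only the changed files 
over to the live box; a sketch, with made-up host names and paths:

  # Run this on the CMS box after each regeneration.
  generated = '/var/cms/out/'                        # where the CMS writes pages
  live      = 'www@live.example.com:/var/www/html/'  # live box docroot over SSH

  ok = system('rsync', '-az', '--delete', generated, live)
  abort 'publish failed, live site left as it was' unless ok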
>
>
> Anyways, this is still in the very early stages. I would like to hear 
> more input!
>
> thanks,
>
> Sean