Tuesday, November 20, 2007

Re: 64-bit computer systems

Truitt Vance wrote:

Josh, maybe you or somebody here could answer this then:

I built all our office computers up to Core 2 Duos last year and earlier this year and am really happy with the performance of these machines.  I had sort of assumed that most applications could take advantage of the two cores and speed things up a bit, though.  I now know that the main advantage is that your processor isn't SLAMMED by one application: you still have 50% of your resources free for other things while you are running solutions in RISA or whatever the demanding program may be.  That's great, but each core is actually slower than some of the hot-running single-core processors (3 GHz+), so net processing time actually increases!  Hmmmm, is that good?

When are applications going to be written to use multiple cores?  Can that be done with your program? (If not, when?)  We use Vectorworks for drafting and love the program... but we REALLY wish they would take advantage of dual cores.  Maybe I should write them.  I don't think programs other than CAD and FEA packages really need to accomplish this, though.

But then again... depending on how efficiently the programs use the cores, our computers could go back to being slammed when we push Solve... or render something.

Truitt Vance

From: Josh Plummer [mailto:josh.plummer@cox.net]
Sent: Tuesday, October 30, 2007 7:50 AM
To: seaint@seaint.org
Subject: RE: 64 bit computers systems

Bill Polhemus wrote:

"The chances of there being a single application that needs more than 4 GB of physical memory are pretty slim for the next decade or so."

That's a common misconception, Bill.  Yes, the OS handles where the program sits in memory.  But a 32-bit process still gets only a 32-bit (4 GB) virtual address space, and on 32-bit Windows an application actually gets just 2 GB of that (3 GB with the /3GB boot switch).  So it is still going to limit how much memory a 32-bit program is allowed to use, no matter how much physical RAM is installed.
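
To make that concrete, here's a bare-bones C sketch (my own illustration, nothing from RISA) that just asks malloc() for 64 MB blocks until the request fails.  Built as a 32-bit executable, it gives up somewhere around 2-3 GB no matter how much RAM is in the box:

    /* Illustration only: exhaust a 32-bit process's address space.
     * The blocks are leaked on purpose; the OS reclaims them at exit. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t block = 64u * 1024u * 1024u;   /* 64 MB per request */
        size_t total = 0;

        while (malloc(block) != NULL)
            total += block;

        printf("malloc() failed after %zu MB\n", total / (1024u * 1024u));
        return 0;
    }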

Those who are more removed from day-to-day analysis work, or who work on smaller structures, always think that no one could EVER use 4 Gigs of memory.  I won't say that running into that limit is common, but it happens way, way more often than you imply.

Before RISA added its new "sparse solver", people would contact me at least once a month regarding that memory issue.  Since many of the other common programs (RAM?, STAAD?) do not have a true sparse solver, I imagine they run into the issue just as often as we used to... especially for large models that require lots of plate elements.
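
To put some rough numbers on why the sparse solver matters (these are my assumptions, not RISA internals: 20,000 nodes at 6 DOF each, roughly 60 nonzeros per matrix row, 8-byte doubles), here's a back-of-the-envelope sketch in C comparing dense storage to a compressed sparse row (CSR) layout:

    /* Back-of-the-envelope only: assumed model size, not RISA internals. */
    #include <stdio.h>

    int main(void)
    {
        double n   = 120000.0;        /* equations: 20,000 nodes x 6 DOF */
        double nnz = 60.0 * n;        /* assumed nonzeros in the matrix  */

        double dense  = n * n * 8.0;                         /* full matrix      */
        double sparse = nnz * (8.0 + 4.0) + (n + 1.0) * 4.0; /* CSR values+index */

        printf("dense : %.1f GB\n", dense  / 1e9);
        printf("sparse: %.1f MB\n", sparse / 1e6);
        return 0;
    }

That works out to roughly 115 GB dense versus under 100 MB sparse.  Nobody actually stores the matrix fully dense, of course; banded and skyline solvers land somewhere in between.  But it shows how fast a plate-heavy model can blow past a 32-bit limit without a true sparse solver.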

Then you've got the ASCE requirements for quartering winds and eccentric earthquakes.  Run all of those for both strength and serviceability load combinations at the same time, and you quickly get to 150+ load combinations.  For models with tens of thousands of members and plates, you'd be surprised how easy it is to run into that memory limit.
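
As a sanity check on the arithmetic (illustrative numbers only, not any particular model), the result storage alone for a job that size looks something like this:

    /* Illustrative numbers: result storage for 150 load combinations. */
    #include <stdio.h>

    int main(void)
    {
        double dof      = 120000.0;   /* 20,000 nodes x 6 DOF       */
        double members  = 20000.0;
        double sections = 5.0;        /* result points per member   */
        double combos   = 150.0;

        double displ  = dof * combos * 8.0;                      /* joint displacements */
        double forces = members * sections * 6.0 * combos * 8.0; /* member end forces   */

        printf("displacements: %.0f MB\n", displ  / 1e6);
        printf("member forces: %.0f MB\n", forces / 1e6);
        return 0;
    }

That's 144 MB of displacements and 720 MB of member forces before you count the solver workspace or any plate results... and a 32-bit Windows process normally only gets 2 GB to play with.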

Sincerely,

Josh Plummer, SE

From: Bill Polhemus [mailto:bill@polhemus.cc]
Sent: Monday, October 29, 2007 8:21 PM
To: seaint@seaint.org
Subject: Re: 64 bit computers systems

Josh Plummer wrote:

Jason -

Another argument against going with a 64-bit system that nobody mentioned: even if you have 32 Gigs of memory in your new system, your engineering applications will (most likely) not be able to use more than 3.5 or 4 Gigs of it!!

The application's address space really isn't an issue, though. The OS handles that. The application doesn't care where it's loaded in memory, just as long as it can play peacefully there.

The chances of there being a single application that needs more than 4 GB of physical memory are pretty slim for the next decade or so.

This sounds more like a "throughput" problem.

And I think it is more than just "possible" for applications to use more than a single core. Remember that the more sophisticated apps are going to use multiple threads, for example, to "parallel process" data.
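
Just to sketch the idea (a bare-bones illustration, not anybody's actual solver code): split an embarrassingly parallel loop, here summing a big array, across two POSIX threads, one per core, and combine the partial results:

    /* Minimal two-thread "parallel processing" sketch (compile with -pthread). */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 2              /* one worker per core on a Core 2 Duo */
    #define N        1000000

    static double data[N];

    struct slice { size_t lo, hi; double sum; };

    static void *worker(void *arg)
    {
        struct slice *s = arg;
        s->sum = 0.0;
        for (size_t i = s->lo; i < s->hi; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        pthread_t    tid[NTHREADS];
        struct slice job[NTHREADS];

        for (size_t i = 0; i < N; i++)
            data[i] = 1.0;

        for (int t = 0; t < NTHREADS; t++) {   /* hand each thread its slice */
            job[t].lo = (size_t)t * N / NTHREADS;
            job[t].hi = (size_t)(t + 1) * N / NTHREADS;
            pthread_create(&tid[t], NULL, worker, &job[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {   /* wait, then combine results */
            pthread_join(tid[t], NULL);
            total += job[t].sum;
        }

        printf("sum = %.0f\n", total);         /* prints 1000000 */
        return 0;
    }

A real finite element solve is much harder to carve up than a simple sum (the factorization step has genuine data dependencies), which is presumably part of why the analysis vendors have been slow to take advantage of that second core.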