Slide 10
What are Our Problems?
As core counts grow from 43,008 to 1,000,000
Our algorithm is stuck at 1,000-2,000 cores without a major rework, which we cannot afford
Even if we could afford a rework, it is not clear how many more cores we could utilize, but certainly << 1,000,000!
-------------- SUMMARY -------------
As individual cores get slower, our execution time gets longer because we cannot use more cores
Yet users want more and better answers as computers get more powerful
Or rather, as users' needs become more demanding
Slide 11
What are Our Problems?
HPCWire, 9/24/08, “Intel: CPUs Will Prevail Over Accelerators in HPC”
“What we're finding is that if someone is going to go to the effort of optimizing an application to take advantage of an offload engine, whatever it may be, the first thing they have to do is parallelize their code”
Richard Dracott, General Manager, HPC Business Unit
Slide 12
Am I (Are We) Unique?
There are many small and large ISVs
Abaqus: < 128 cores
Few open-source packages can use >> 128 cores
Most (if not all) day-to-day engineering packages cannot use more than 1,000 cores
MCNPX can use at most 2,000-4,000 cores, and only for certain types of problems
Almost everybody needs help!
Those that do not need help can afford
To rewrite code when new architectures arrive
To write code from scratch to fit an architecture
Slide 13
Smarter compilers – User’s Point of View!
No new language, just extend Fortran and C
A programming environment like OpenMP, but with MPI underneath (see the sketch at the end of this slide)
Nice if housed in current compiler environments:
Intel
PGI
Absoft
etc…
Do not care if production compile takes days
Enable non-x86 hardware in current compilers
ASICs
GPGPU/Cell
FPGA
etc…
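A minimal sketch (not from the codes discussed here) of what "like OpenMP, but with MPI" means in practice: today a single OpenMP directive lets the compiler spread a loop over shared-memory cores, and the request is for the same directive style in Fortran and C that a compiler could turn into distributed-memory (MPI) execution across many more cores. The array name and loop body below are placeholders.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double flux[N];   /* stand-in data, not real physics */
    double total = 0.0;
    int i;

    /* One directive; the compiler handles the shared-memory decomposition.
       The slide's wish is the same experience, but emitting MPI code. */
    #pragma omp parallel for reduction(+:total)
    for (i = 0; i < N; i++) {
        flux[i] = 1.0 / (double)(i + 1);
        total += flux[i];
    }

    printf("sum = %f, threads available = %d\n", total, omp_get_max_threads());
    return 0;
}

Built with something like gcc -fopenmp; the point is that no rewrite of the solver is needed to use more shared-memory cores, and the same should hold for many-node execution.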
Slide 14
Nothing but the possible solutions
Develop new algorithms to solve the Boltzmann Equation (written out at the end of this slide for reference) so that > 1,000,000 cores can be utilized
Over $10M and 3 years to parallelize and V&V what we already have, and this must be done first!
Over $100M and 10 years to develop new solution methods that fit the chip makers' vision, and then V&V those methods
Space radiation is a small but unique solution domain within the larger world of radiation analysis
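For reference, a standard steady-state form of the linear Boltzmann transport equation referred to above (a general textbook form; the exact formulation used in the space-radiation codes may differ):

\[
\boldsymbol{\Omega}\cdot\nabla\psi(\mathbf{r},\boldsymbol{\Omega},E)
+ \sigma_t(\mathbf{r},E)\,\psi(\mathbf{r},\boldsymbol{\Omega},E)
= \int_0^{\infty}\!\int_{4\pi}\sigma_s(\mathbf{r},\boldsymbol{\Omega}'\!\rightarrow\!\boldsymbol{\Omega},E'\!\rightarrow\!E)\,
\psi(\mathbf{r},\boldsymbol{\Omega}',E')\,d\Omega'\,dE'
+ S(\mathbf{r},\boldsymbol{\Omega},E)
\]

where \(\psi\) is the angular flux, \(\sigma_t\) the total cross section, \(\sigma_s\) the scattering kernel, and \(S\) the external source.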