Crush wrote: Those upcoming computing platforms with hundreds of CPU cores are an interesting technology indeed, but unfortunately they require a completely new way to develop software to make use of them.
Parallel computing is not that new: I first had to deal with it when it arrived in France, during my doctoral thesis in 1982 (on an Amdahl computer). The first Cray-1 had been acquired for meteorology needs a little before that. The technology has developed at a huge speed along with the rest of computing: at the moment, every respectable video card's graphics processor relies on it. As for central units, they were first superscalar (multiple math coprocessors, MMX) and are now multi-core, with even "splittable" cores (hyperthreading).
Crush wrote: Most programs today run as a single thread on a single CPU.
All (respectable) graphics software (including manaplus) takes advantage of the GPU, and all (respectable) audio software can take advantage of a DSP; both are de facto multi-threaded or superscalar.
Crush wrote: And most programming languages widely used today are designed for writing such single-threaded programs. Now that CPUs have stopped getting notably faster and only scale by putting more and more cores into a single system, this paradigm has to be reconsidered.
Parallel computing does not require a new language, nor a great modification of the algorithms, but a different way of programming. The parallelization itself occurs at compile time. When I first used it, I had to include compiler directives in the code, but nowadays compilers can automate loop vectorization and vector (SIMD) computations.
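To make that concrete, here is a minimal sketch (my own illustration; the thread names no language or compiler, so I assume C and GCC with OpenMP). The algorithm is an ordinary loop; a single directive lets the compiler spread it across cores and SIMD lanes, and with optimization enabled (e.g. -O3) GCC can often vectorize such a loop even without any directive:

[code]
/* Hypothetical example: compile with  gcc -O2 -fopenmp saxpy.c */
#include <stddef.h>

void saxpy(size_t n, float a, const float *x, float *y)
{
    /* Each iteration is independent (no data dependency between the
     * y[i] values), so the compiler is free to split the loop across
     * CPU cores ("parallel for") and SIMD lanes ("simd"). */
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
[/code]

Note that the source code and the algorithm stay the same; only the directive (or the compiler's own vectorizer) decides how the work is distributed.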
Crush wrote: But changing a program designed to be single-threaded to make use of multiple CPU cores is a very hard task. Multi-threaded programs need to be designed completely differently from single-threaded ones. It's not just a port to a different architecture; it usually requires a complete redesign of the whole software architecture. Also, most of the programming languages used today aren't very suitable for massive parallelization. Writing programs which use multiple threads in procedural or object-oriented languages is often cumbersome and error-prone.
No software design is an easy task. Once again, parallelization has little to do with the language and much to do with the compilation. The main and most difficult task is dealing with data dependencies, and not introducing new ones at the coding step; the sketch after the link below illustrates the difference.
https://computing.llnl.gov/?set=code&page=intel_vector
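To illustrate the dependency point (again my own sketch in C; it is not taken from the linked LLNL page): the first loop below has no dependency between iterations and is a natural candidate for vectorization, while the second carries a dependency from one iteration to the next and therefore must run serially:

[code]
#include <stddef.h>

/* No loop-carried dependency: each a[i] depends only on b[i],
 * so a vectorizing compiler can process many elements at once. */
void independent(size_t n, float *a, const float *b)
{
    for (size_t i = 0; i < n; i++)
        a[i] = 2.0f * b[i];
}

/* Loop-carried dependency: a[i] needs the a[i-1] computed in the
 * previous iteration, which forces sequential execution. */
void carried(size_t n, float *a)
{
    for (size_t i = 1; i < n; i++)
        a[i] = a[i - 1] + 1.0f;
}
[/code]

Writing the code in the first form whenever the problem allows is exactly the "different way of programming" mentioned above.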
Crush wrote: The industry is working mostly with object-oriented languages.
No, the industry is working mostly with vectorizing, parallel, multi-core compilation together with object-oriented languages; otherwise you would not have one-week weather forecasts, collaborative engineering design (architecture, mechanics and electronics), CAD and CAM, 3D virtual-reality cinema pictures and video games, or fast data acquisition with real-time processing.
Refs:
There is even a GNU shell tool to run several tasks in parallel:
http://www.gnu.org/software/parallel/
"The language of everyday life is clogged with sentiment, and the science of human nature has not advanced so far that we can describe individual sentiment in a clear way." Lancelot Hogben, Mathematics for the Million.
“There are two motives for reading a book; one, that you enjoy it; the other, that you can boast about it.” Bertrand Russell, Conquest of Happiness.
"If you optimize everything, you will always be unhappy." Donald Knuth.