
To follow: Parallella, A Supercomputer For Everyone

Posted: 18 Apr 2013, 13:49
by Nard
I recently found this project:
Parallella: A Supercomputer For Everyone
I can see personal scientific applications for such an architecture, but I think it would also be well suited for MMO game hosting...

Btw, I received my Raspberry Pi and am impatient to try it as a TMW client or LAN server :P Probably not the best one, but among the cheapest... :P

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 18 Apr 2013, 14:29
by Crush
Those upcoming computing platforms with hundreds of CPU cores are indeed an interesting technology, but unfortunately they require a completely new way of developing software to make use of them.

Most programs today run as a single thread on a single CPU. And most programming languages widely used today are designed for writing such single-threaded programs. Now that CPUs have stopped getting notably faster and only scale by putting more and more cores into a single system, this paradigm has to be reconsidered.

But changing a program designed to be single-threaded so that it makes use of multiple CPU cores is a very hard task. Multi-threaded programs need to be designed completely differently from single-threaded ones. It's not just a port to a different architecture; it usually requires a complete redesign of the whole software architecture. Also, most of the programming languages used today aren't very suitable for massive parallelization: writing programs which use multiple threads in procedural or object-oriented languages is often cumbersome and error-prone.
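A classic illustration of how error-prone it gets (a minimal C sketch, assuming POSIX threads; the counter and function names are made up): two threads incrementing a shared counter without a lock will usually lose updates, because counter++ is not a single atomic operation.

Code: Select all

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                  /* shared mutable state */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                        /* read-modify-write: a data race */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* usually less than 2000000 */
    return 0;
}

Compile with gcc -pthread; the missing mutex is exactly the kind of bug that is trivial to write and hard to reproduce.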

There is, however, the family of functional programming languages, which are often much better suited to parallelization. Because they avoid side effects and mutable data, the compiler can much more easily detect automatically when certain operations can be parallelized across multiple cores. But functional languages are currently mostly restricted to academic circles. The industry is working mostly with object-oriented languages.
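The underlying idea (side-effect-free computations are safe to spread across cores) can be imitated in C with annotations; a sketch, assuming OpenMP and a made-up function name. Note the difference from a functional compiler, though: here the programmer asserts the independence with a pragma instead of the compiler proving it.

Code: Select all

#include <math.h>

/* each out[i] depends only on in[i]: no shared mutable state, */
/* so the iterations may safely run on any number of cores     */
void map_sqrt(const double *in, double *out, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        out[i] = sqrt(in[i]);
}

(Build with gcc -fopenmp; without the flag the pragma is ignored and the loop simply runs sequentially.)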

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 18 Apr 2013, 16:09
by Nard
Crush wrote: Those upcoming computing platforms with hundreds of CPU cores are indeed an interesting technology, but unfortunately they require a completely new way of developing software to make use of them.

Parallel computing is not that new: I first had to deal with it when it arrived in France during my doctoral thesis in 1982 (on an Amdahl computer). The first Cray-1 had been acquired for meteorology needs a little before that. It has developed at a huge pace along with computer technology in general. At the moment every respectable video card's graphics processor uses this technology. For central units, they were first multiscalar (multiple math coprocessors, MMX) and are now multi-core, with even "splittable" cores (hyperthreading).
Most programs today run as a single thread on a single CPU.
All (respectable) graphics software (including ManaPlus) takes advantage of the GPU, and all (respectable) audio software can take advantage of DSPs; they are de facto multi-threaded or multi-scalar.
And most programming languages widely used today are designed for writing such single-threaded programs. Now that CPUs have stopped getting notably faster and only scale by putting more and more cores into a single system, this paradigm has to be reconsidered.
Parallel computing does not require any language modification, nor a great modification of algorithms, but a different way of programming. Parallelization occurs only at compile time. When I first used it, I had to include compiler directives in the code, but nowadays one can automate loop vectorization and vector computations (SIMD).
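For example (a sketch, assuming GCC or Clang; the function and array names are made up): a loop like this one is typically vectorized automatically at -O3, with no change to the source at all.

Code: Select all

/* independent iterations: the compiler can turn this loop      */
/* into SIMD instructions on its own, e.g. with                 */
/*     gcc -O3 -ftree-vectorize                                 */
void saxpy(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}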
But changing a program designed to be single-threaded so that it makes use of multiple CPU cores is a very hard task. Multi-threaded programs need to be designed completely differently from single-threaded ones. It's not just a port to a different architecture; it usually requires a complete redesign of the whole software architecture. Also, most of the programming languages used today aren't very suitable for massive parallelization: writing programs which use multiple threads in procedural or object-oriented languages is often cumbersome and error-prone.

No software design is an easy task. Once again, parallelization has little to do with languages and much to do with compilation. The main and most difficult task is dealing with data dependencies, and not introducing new ones at the coding step; an illustration follows below.
https://computing.llnl.gov/?set=code&page=intel_vector
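To make the dependency idea concrete (a sketch; the arrays x, y, a, b and the length n are hypothetical and assumed declared):

Code: Select all

/* loop-carried dependency: x[i] needs the x[i-1] produced by   */
/* the previous iteration, so the steps must run one at a time  */
for (int i = 1; i < n; i++)
    x[i] = x[i-1] + b[i];

/* no such dependency: each y[i] uses inputs only, so the       */
/* compiler is free to vectorize or parallelize this loop       */
for (int i = 0; i < n; i++)
    y[i] = a[i] + b[i];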
The industry is working mostly with object-oriented languages.
No, industry is working mostly with vector, parallel and multicore compilation on top of object-oriented languages, or you would not have one-week weather forecasts, collaborative engineering design (architecture, mechanics and electronics), CAD and CAM, 3D virtual-reality cinema pictures and video games, or fast data acquisition with real-time processing.

Refs: there is even a GNU shell tool to run several tasks in parallel: http://www.gnu.org/software/parallel/

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 18 Apr 2013, 16:15
by o11c
The kind of parallelization that is done automatically by compilers is simply not comparable to true parallelization.

The biggest killer of parallelization - any kind - is data dependencies, which are very tightly tied to languages.
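A concrete case of the language tie-in (a C sketch, hypothetical function names): because C pointers may alias, the compiler has to assume a possible dependency between dst and src and generate conservative code; the C99 restrict qualifier is a language feature whose whole job is to rule that dependency out.

Code: Select all

/* without restrict, dst and src might overlap, so the compiler */
/* must assume a possible dependency and stay conservative      */
void scale(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = k * src[i];
}

/* restrict promises the arrays do not overlap: the potential   */
/* dependency disappears and the loop can vectorize freely      */
void scale_fast(float *restrict dst, const float *restrict src,
                float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = k * src[i];
}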

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 18 Apr 2013, 16:19
by Nard
o11c wrote:The kind of parallelization that is done automatically by compilers is simply not comparable to true parallelization.

The biggest killer of parallelization - any kind - is data dependencies, which are very tightly tied to languages.
No, it is not done automatically if you didn't take care of data dependencies beforehand, or if you write your loops in the wrong order. Data dependencies depend on the data, or on the algorithm, not on the code or the language.

Edit: Parallelization can only be done by compilers, and only if the code is compatible. Parallelization begins with 2 coprocessors or cores (even virtual ones).
Trivial example: compute the scalar product of two vectors A [a(i)] and B [b(i)] (written below as C fragments; the arrays a[], b[] and the length n are assumed declared):
loop1:

Code: Select all

double SP = 0.0;
for (int i = 0; i < n; i++)
    SP = SP + a[i] * b[i];   /* each step needs the SP from the previous step */
printf("SP = %f\n", SP);
loop2:

Code: Select all

double TP[n];                /* temporary product vector */
double SP = 0.0;
for (int i = 0; i < n; i++)  /* independent iterations: vectorizable */
    TP[i] = a[i] * b[i];
for (int i = 0; i < n; i++)  /* the reduction, done as a separate pass */
    SP = SP + TP[i];
printf("SP = %f\n", SP);
The first program is not automatically vectorizable but uses little memory; the second uses n extra elements of memory but is vectorizable (including the memory I/O).

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 19 Apr 2013, 03:58
by AnonDuck
I actually have a Parallella on pre-order and can't wait to play with it. A big limitation of this platform is that the Epiphany cores do not have very much local memory (not even enough to hold the code for printf()). There is a library that allows them to use external memory, but that adds a lot of overhead...

For this sort of highly limited setup, nothing will beat well-thought-out, hand-optimized code. That makes me very happy, as this seems to be a dying art form as hardware and software become more powerful :) I hope to do interesting things with it.

Re: To follow: Parallella, A Supercomputer For Everyone

Posted: 19 Apr 2013, 09:17
by Nard
AnonDuck wrote: A big limitation of this platform is that the Epiphany cores do not have very much local memory (not even enough to hold the code for printf()). There is a library that allows them to use external memory, but that adds a lot of overhead...
Performance will strongly depend on the compiler's quality. The Epiphany cores should probably be reserved for basic arithmetic operations, with I/O left to the main cores. I guess many people are following the project and will help with that, if the idea is not "stolen" by some major company and released first. :/