off topic - parallella

Roger Critchlow

off topic - parallella

Sorry for the off-topic post, but http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone has 28 hours to meet its funding goal, and the reward for a $99 pledge is a 2.1 by 3.4 inch board with a dual core ARM, 1 gig memory, an FPGA, a 16 core RISC multiprocessor, USB, HDMI, Ethernet, microSD, and GPIO.  25 GFLOPS at 5 watts.  It's a long way from your typical embedded hardware, or your typical eLua target, but it's also a collection of capabilities that's likely to start turning up in embedded systems over the next decade.

-- rec --

_______________________________________________
eLua-dev mailing list
[hidden email]
https://lists.berlios.de/mailman/listinfo/elua-dev
Antti Lukats

Re: off topic - parallella

Hi

Some things that sound too good to be true usually are?
The 99 USD price is a very bold promise, indeed it is - at this very
moment we are designing an embedded module with the Zynq 7020, and we
have some of those Zed boards in our office too (the boards used to
prototype the Parallella).

Some comments:

7020 CES pricing is around 300 USD; there are no better price quotes
from any official channels.
There is NO production (non-CES) price quoted for the 7020.
There has NEVER been any price info for the 7010 from any channel.
There ARE RUMORS that Xilinx may not produce the 7010 at all..

Parallella promises 2x USB and GbE, but those functions cannot all be
mapped to the ARM MIO pins at the same time; the Zed board has one USB
and one GbE. Sure, it is possible to assign 2 USB ports to the ARM PS
and route the GbE through a wrapper in the FPGA core fabric. It's
doable but clumsy, and it means there is no way to update the system
over Ethernet when the FPGA is not configured and only the ARM PS has
booted.

Hm.. it would REALLY REALLY rock if they can produce those boards for
99 USD!!
Zedboard.org sells for 395 USD (299 academic), and that price is
already partially sponsored by Xilinx and other silicon vendors.

Hmmm, it looks like they are going to make the goal; the total has
been jumping in the last few hours..

Antti
PS the module we are working on is 40x50 mm, a bit smaller than the
Parallella :)

On Fri, Oct 26, 2012 at 8:26 PM, Roger Critchlow <[hidden email]> wrote:

> Sorry for the off-topic post, but
> http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
> has 28 hours to meet its funding goal, and the reward for a $99 pledge is a
> 2.1 by 3.4 inch board with a dual core ARM, 1 gig memory, an FPGA, a 16 core
> RISC multiprocessor, USB, HDMI, Ethernet, microSD, and GPIO.  25 GFLOPS at 5
> watts.  It's a long way from your typical embedded hardware, or your typical
> eLua target, but it's also a collection of capabilities that's likely to
> start turning up in embedded systems over the next decade.
>
> -- rec --
>
Roger Critchlow

Re: off topic - parallella



On Fri, Oct 26, 2012 at 11:56 AM, Antti Lukats <[hidden email]> wrote:
> Hi
>
> some things that sound too good to be true are?

Yes, you always have to wonder about that, especially where the whole is less than the sum of the parts because the parts are mutually exclusive.

Thanks for the reality checks.  I don't have any info on Zynq-7010 existence, availability or pricing, or on the IO multiplexing issues you bring up, or on what sorts of cross-subsidy deals might be going on.  Or who owns stakes in Adapteva, for that matter, and what value they might place on successfully crowdfunding the Epiphany retape.


> Hm.. it would be REALLY REALLY rock if they can produce those boards
> for 99USD.. !!

Yes, it would.  

But I will also now expect the after-kickstarter retail price to be more than 99USD, and various changes in functionality to reduce costs.


-- rec --

Martin Guy

Re: off topic - parallella

On 26/10/2012, Roger Critchlow <[hidden email]> wrote:
> http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone

There's another factor: lots of wimpy cores don't equal a very fast
computer. You need a task that parallelises well and loads all the
cores equally, and even when you do have a parallelisable task, the
total execution time for a job often ends up waiting for the slowest
thread to complete, so it still depends on the speed of a single core.
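Martin's point is easy to see with a toy model (the numbers below are mine, purely illustrative): if per-core work is measured in arbitrary time units, the wall-clock time of the parallel job is set by the slowest core, so skewed load caps the speedup no matter how many cores you have.

```python
def speedup(chunks):
    """Serial time / parallel time, one chunk of work per core."""
    serial_time = sum(chunks)
    parallel_time = max(chunks)  # everyone waits for the slowest core
    return serial_time / parallel_time

balanced = [4, 4, 4, 4]   # 16 units of work spread evenly over 4 cores
skewed = [1, 1, 1, 13]    # the same 16 units, one overloaded core

print(speedup(balanced))  # 4.0  -- the ideal case
print(speedup(skewed))    # ~1.23 -- barely better than a single core
```

Same total work, same core count; only the balance differs.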

And Lua only runs on a single thread anyway.

But for intellectual curiosity, see Google's paper on the subject:
"Brawny cores still beat wimpy cores, most of the time"
http://research.google.com/pubs/archive/36448.pdf

    M
jbsnyder

Re: off topic - parallella

On Sat, Oct 27, 2012 at 1:05 AM, martinwguy <[hidden email]> wrote:
> On 26/10/2012, Roger Critchlow <[hidden email]> wrote:
>> http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
>
> There's another factor: lots of wimpy cores doesn't equal a very fast computer.
> On the one hand, you need to have a task that parallelises well and
> uses all the cores with the same load on each, and even if you do have
> a parallelisable task, often the total execution time for a job ends
> up waiting for the slowest thread to complete, so still depends on the
> speed of a single core.

As I think anyone who's written CUDA or OpenCL code can tell you,
parallel computing is not a panacea.  The task has to be well suited
to get decent performance (usually meaning the work for a given core
fits in its local memory, doesn't hit global memory frequently, and
plays nicely with other cores' global accesses). The nature of the
individual cores limits what's efficient to do with them: they have
limited instruction sets, and at least historically with GPUs, when
you do have floating point it's not double precision.  The Epiphany
seems to have this particular limitation as well:

"Double-precision floating-point arithmetic is emulated using
software libraries and should be avoided if performance considerations
outweigh the need for additional precision."
-- http://www.adapteva.com/wp-content/uploads/2012/10/epiphany_arch_reference_3.12.10.03.pdf
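The cost of that limitation is easy to underestimate. A quick sketch (plain Python using ctypes to simulate a 32-bit accumulator; an illustration of single- vs. double-precision drift, not Epiphany code) shows how fast single-precision rounding error piles up in a long accumulation:

```python
import ctypes
import math

# Accumulate 0.1 one million times, rounding to 32-bit float on every
# store (ctypes.c_float), versus summing in double precision.
acc32 = ctypes.c_float(0.0)
for _ in range(1_000_000):
    acc32.value += 0.1  # add in double, then round to single on store

acc64 = math.fsum(0.1 for _ in range(1_000_000))  # double precision

print(acc32.value)  # drifts visibly away from 100000
print(acc64)        # agrees with 100000 to ~11 decimal places
```

If a workload needs that lost precision, software-emulated doubles (and their speed penalty) are the only way to get it back on such hardware.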

Also, a kernel and processing model that is highly efficient on one of
these architectures is not necessarily efficient on another (even,
say, nVIDIA <-> ATI); you really need to pay attention to the
architecture.

That said, their proposed price point for a self-contained unit with
the specifications they're indicating sounds quite impressive to me,
and I think there are a lot of interesting potential applications for
it. If you want to play with the programming model for something like
this, you should be able to use OpenCL, CUDA or DirectCompute on your
desktop if you have a recent-enough GPU (nVIDIA & ATI have shipped
support for some time). Just be prepared to shift your programming
paradigm to an entirely different perspective :-)  Also, if you do
numerical computing, there are some nice wrappers for languages like
Python on these platforms:

http://mathema.tician.de/software/pycuda
http://mathema.tician.de/software/pyopencl

>
> And Lua only runs on a single thread anyway.

Indeed. I could potentially see using something like this with Lua the
way pycuda/pyopencl work (sending chunks of work out to kernels
running on a GPU or other highly parallel, minimalist-core
architecture), but this machine runs a full Linux, so one might as
well take advantage of existing scripting or low-level language
support, I think.
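The dispatch model described above can be sketched in a few lines (my own toy example: Python threads stand in for kernel launches, and the function and chunking scheme are invented for illustration, not taken from pycuda/pyopencl):

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # Per-"core" work on its own chunk: no shared state, roughly like
    # an OpenCL work-group operating on its local memory.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]        # scatter the work

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(kernel, chunks))  # "launch" the kernels

total = sum(partials)                          # gather the results
print(total)  # 332833500, the sum of squares 0..999
```

A real GPU/Epiphany version would replace the thread pool with device kernel launches and explicit host<->device transfers, but the scatter/compute/gather shape is the same.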

That said, while I can't think of any practical use, it would be quite
neat if you could fit a Lua interpreter in the local memory space of
an individual core :-)

>
> But for intellectual curiosity, see Google's paper on the subject:
> "Brawny cores still beat wimpy cores, most of the time"
> http://research.google.com/pubs/archive/36448.pdf
>
>     M



--
James Snyder
Biomedical Engineering
Northwestern University
http://fanplastic.org/key.txt
ph: (847) 448-0386