6

I have seen a bunch of articles about how Raspberry Pis can be joined to make a cluster. I am basically a 3D artist, and rendering a 10-second animation can take hours.

So if I make a cluster of about 20 or more Raspberry Pis, each with 1 GB of RAM, will the end result have 20 GB of RAM? I just want to know what role the cluster plays regarding RAM. RAM is what troubles me; I'm not focused on the processor, only the RAM.

Thanks for your patience. P.S. This is my first question here.

Rehan Ullah
  • 163
  • 1
  • 1
  • 5

6 Answers

23

The general consensus is that clusters are a waste of bandwidth. Yes, your cluster will have access to the sum of all the processing power and RAM, but you are introducing network latency into your performance equation. If you are focused more on RAM than CPU, you could build a RAM-heavy desktop for the same price as your Pi cluster. You mentioned 20 RPi 2 devices for your cluster: 20 × $35 = $700. If you go the AMD route (less expensive for the same performance level as Intel), you could build a desktop with 32 GB of RAM for that same dollar amount.

Also, the RAM on the RPi (LPDDR2) runs at 400 MHz and can be accessed at a rate of 800 MT/s, whereas an (AMD-based) desktop uses DDR3 RAM that runs at 1066 MHz and can be accessed at a rate of 2133 MT/s, roughly 2.7 times faster.
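
The back-of-the-envelope numbers above can be checked in a few lines (prices and transfer rates are the figures quoted in this answer, not measurements):

```python
# Cluster cost from the figures above: 20 boards at $35 each.
pi_price = 35
pi_count = 20
cluster_cost = pi_price * pi_count
print(f"Cluster cost: ${cluster_cost}")  # $700

# Quoted memory transfer rates (MT/s): LPDDR2-800 on the Pi
# vs. DDR3-2133 on a typical desktop.
pi_mts = 800
desktop_mts = 2133
ratio = desktop_mts / pi_mts
print(f"Desktop RAM is roughly {ratio:.1f}x faster")  # roughly 2.7x
```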

All things considered, yes, building a cluster of Pis is a cool project. But if your aim is better performance, a desktop with better specs is the better solution.

tlhIngan
  • 3,372
  • 5
  • 21
  • 33
6

Short answer: probably

It really depends on whether or not the process can be parallelized. Some processes just can't be split among the RPis and therefore wouldn't benefit from a cluster. But rendering animations sounds like a task that can be split up, and would therefore benefit from one.
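
To make the "can it be split?" point concrete, here's a minimal sketch of frame-level parallelism, the property that makes rendering cluster-friendly: each node gets its own subset of frames and renders them independently (the `split_frames` helper is illustrative, not part of any real render manager):

```python
# Sketch: splitting an animation's frames across cluster nodes.
# Frames don't depend on each other, so nodes rarely need to "talk" --
# which is why rendering parallelizes well despite network latency.

def split_frames(frame_count, node_count):
    """Assign frames round-robin: node i gets frames i, i+n, i+2n, ..."""
    return [list(range(node, frame_count, node_count))
            for node in range(node_count)]

# A 10-second animation at 24 fps, spread over 20 Pis:
assignments = split_frames(240, 20)
print(len(assignments[0]))  # each node renders 12 frames
```

Each node would then run the actual renderer (e.g. a command-line render of its assigned frames) and copy the finished images back to a shared location.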

@tlhIngan said that a cluster introduces a lot of network latency, which is true. I don't know much about rendering, but I think this would have little effect here, since when rendering, the different processes probably don't need to "talk" to one another all that much.

If you would like some more insight, I'd recommend this question and this related forum thread from the official RPi forum (though it has less to do with graphics and more with general clustering), as well as How do I build a cluster?

If you'd like to buy a setup with minimal work on your part, Idein Inc. (http://idein.jp) is building a board that makes it easier to connect 16 RPi Zeros; it would take care of the connections and make your desk look a little less like a rat's nest (if you can find the Zeros, as they are extremely scarce right now).

sir_ian
  • 980
  • 4
  • 17
  • 38
6

Probably not. There are a few issues here.

The Raspberry Pi runs the ARM architecture, and I've never seen rendering software that runs on it. The best render farm is useless if your software won't work.

While pricier, x86 has better single-threaded performance and more available software. And while the Pi's on-die RAM might have lower latency, more and faster RAM would be handy.

"So if I make a cluster of about 20 or more raspberries with each having 1 GB RAM will the end result have 20 GB RAM?"

No. You would run X tasks on a system, each doing part of the job, with Y RAM each. So you could set up your render manager to do 4 tasks with up to 512 MB of RAM each, and split a render over many systems, each handling one frame.
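
In other words, RAM is budgeted per node, not pooled. A rough sketch of the arithmetic a render manager would face on one Pi (the per-task and OS figures are assumed for illustration):

```python
# Sketch: RAM is per-node, not pooled. Each render task's working set
# has to fit into a single node's memory.

node_ram_mb = 1024       # one Pi's RAM
per_task_mb = 512        # assumed peak memory of one render task
os_reserved_mb = 256     # rough allowance for the OS itself

tasks_per_node = (node_ram_mb - os_reserved_mb) // per_task_mb
print(tasks_per_node)  # -> 1: only one 512 MB task fits on a 1 GB Pi
```

A scene whose working set exceeds what's left on one node simply can't be rendered there, no matter how many nodes the cluster has.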

I'd start with the software. Check what it'll run on. There's no point building a Raspberry Pi cluster for software that only works on x86, and you might end up going with a proper PC and a video card if GPU acceleration gives good results with your specific software. My previous job swore by many, many x86 cores, so my answer reflects that.

As for hardware, I think the "scooter computer" Jeff Atwood wrote about would be a good baseline. You could go even cheaper if you wanted to sacrifice some performance for cost.

$350 (or 10 Pis) gets you:

  • i5-5200 Broadwell 2-core / 4-thread CPU at 2.2–2.7 GHz
  • 16 GB DDR3 RAM
  • 128 GB M.2 SSD
  • Dual Gigabit Realtek 8168 Ethernet
  • 4 front USB 3.0 ports / 4 rear USB 2.0 ports
  • Dual HDMI out

You'd get more than 10× the RAM and a faster x86 core with HT.

You don't get a crappy 100 Mbps Ethernet connection bottlenecked by USB.

You get reasonably fast onboard storage (which would also be nice if you need more swap).

You get fewer threads, but with better single-threaded performance (which is nice anyway!).

I've also personally had issues with RPi installs failing, and these machines have actual drives (well, SSDs), not slow SD cards, so they'd be more reliable.

Looking at all this, the Pi cluster would be a terrible option compared to one decent low-end machine.

3

Of course not! Each node in your cluster needs to be able to load all of the textures, geometry, etc. So it would limit the total size of your source data to (much less than) 1 GB, held in 20 copies.

Instead, consider renting an EC2 instance on demand: https://aws.amazon.com/ec2/pricing

For example, a c3.8xlarge at $1.68 per hour will render much faster than a cluster of Pis, and be easier to configure and set up.

(Depending on your location, that's probably in the same ballpark as the electricity to run 20 Pis.)

Agate
  • 31
  • 1
2

Looking at MIPS reports, the speed of the new Pi 3 is such that it takes about 26 of them to equal one Haswell Xeon or i7, so I conclude that it's cheaper to use desktop processors. My desktop has 32 GB of RAM, so that's more than you'd get from 26 1 GB nodes, and you need less anyway since the code doesn't need to be duplicated 26 times.
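
The arithmetic behind that, spelled out (the per-board price is an assumption and excludes power supplies, SD cards, and networking):

```python
# ~26 Pi 3s to match one Haswell-class chip, per the MIPS figures above.
pis_needed = 26
pi_price = 35            # assumed per-board price, excluding PSU/SD/switch
pi_ram_gb = 1

cluster_cost = pis_needed * pi_price   # $910 before cables and switches
cluster_ram = pis_needed * pi_ram_gb   # 26 GB, but in 1 GB islands
print(cluster_cost, cluster_ram)
```

That 26 GB is split into 1 GB nodes, each holding its own copy of the code and scene data, so the usable total is well below a single desktop's 32 GB.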

For the clusters I've seen using older Pis, it would take 4× as many! I think that's the case for the Pi Zero as well. So it's pointless for actual use, but a cheap way to have a platform for testing clustering software on a real cluster.

JDługosz
  • 287
  • 2
  • 3
  • 10
2

To be honest, it depends on what you are computing. Raspberry Pis are made to be versatile and to do a lot of different things: IoT, personal computers, supercomputers, servers, etc.

If you cluster, you increase the power of your Pi setup. There are supercomputers built out of Pis to hash and process data, and there are far more powerful GPU setups that will process graphics and big data as well.

Take cloud computing, for instance, and understand that you can essentially create clusters and supercomputers within a cloud framework.

Then you should understand that adding GPUs on Google Cloud, AWS, Azure, or Bluemix increases the price of your running instance.

Many times it's as expensive, if not far more expensive, just to add a GPU instance.

On Google Cloud, for instance, you can have up to 8 GPUs attached to an 8-core VM instance.

Now, take all the dough you would spend to purchase all those Raspberry Pis, plus the cost of electricity, and understand that in most circumstances you are probably better off running one Raspberry Pi and using it to connect to cloud compute services.

There are demos of cloud computing services to try out, but pretty much none of them will let you try GPU instances on a demo account.

So I would just use one Raspberry Pi running Ubuntu MATE and connect to IBM Bluemix and/or Google Cloud to create clusters.

The only thing that bites with that is that app development in the cloud sucks if you need to run Xcode, because you can dream on finding a macOS image for the cloud without purchasing your own to upload to VMs.

(Unless, of course, you are creating some sort of motorized robotic cluster for physical display purposes.)

That's my 2 cents.

nicholas
  • 21
  • 1