Setting up multiple clusters

 
Hi all,

I've followed Michael Noll's excellent guide to get Hadoop set up as a single-node cluster: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

I'd now like to set up a second node to do some processing. In that guide, and in all the others I've seen, the additional nodes are all on the same LAN. However, I am doing my project on a virtual machine that my university gave me, which I access over PuTTY by connecting to its IP address. Can I connect this machine, through Hadoop, to another machine on an entirely different network? Is this even possible? I tried editing /etc/hosts on both computers and storing the IP address like this:

117.118.45.205:127.0.0.1 localhost

with the real IP address first and the "local" IP address after it, but that didn't work. Does anybody have any suggestions, or is this simply not possible?
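
From what I've read since, /etc/hosts doesn't use a colon syntax at all; each line maps one IP address to one or more hostnames. So I'm guessing each machine actually needs entries more like this (the hostnames and the second IP are placeholders I made up):

127.0.0.1 localhost
117.118.45.205 master
203.0.113.17 slave

and then, if I'm following the multi-node version of the tutorial correctly, the Hadoop config files would refer to those hostnames rather than raw IPs, e.g. in conf/core-site.xml on both boxes:

<property>
<name>fs.default.name</name>
<value>hdfs://master:54310</value>
</property>

Is that roughly the right approach when the two machines aren't on the same LAN?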
 