Thom Parkin

Video Author
since Jan 16, 2015

Recent posts by Thom Parkin

Stephan Hochhaus wrote:Now to explain how data is exchanged we need to take into account that Meteor consists of various pieces, most importantly a standard protocol named DDP that allows bi-directional communication between server and client via WebSockets (SockJS) and a pub/sub approach that keeps track of both subscriptions and data changes (Livequery).
The first request to a Meteor app fetches the application itself to the browser via HTTP. Once all resources are transferred, server and client communicate only via WebSockets. Since an application is already running inside the browser, it can turn raw data into HTML easily.

The client app subscribes to certain data collections, and the Livequery component constantly monitors changes to the data source while keeping track of all subscribed clients. It proactively pushes changes out to all connected and currently subscribed clients over the DDP connection (WebSockets).


This is a great explanation. Thanks.
But, being "an old-timer", I can't help but think how terribly inefficient this is.
First, the ENTIRE application must be downloaded to the client on the initial request. Am I wrong about that?
And, of course, you do not want ALL the [for lack of a better term] "back-end" data on the client (security is one very BIG reason that comes to mind), so there are no real savings; you are not eliminating trips to the server. Are they minimized to such a great extent that it makes a difference?

I remember the days of LiveScript and cannot quite shake the idea that JavaScript belongs only on the client and is very slow (I know this is an outdated and incorrect assumption).
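To make the pub/sub flow Stephan describes a bit more concrete, here is a tiny stand-alone simulation in plain JavaScript. This is NOT Meteor or DDP code - the class and all its names are invented for illustration - it only mimics the idea of a Livequery-style component proactively pushing changes to subscribed clients instead of waiting to be polled.

```javascript
// Simplified stand-in for a server-side collection watched by "Livequery".
class LiveCollection {
  constructor() {
    this.docs = [];
    this.subscribers = new Set(); // one callback per subscribed "client"
  }
  subscribe(onChange) {
    this.subscribers.add(onChange);
    onChange([...this.docs]); // initial data set, like DDP's burst of "added" messages
  }
  insert(doc) {
    this.docs.push(doc);
    // Proactively push the change to every subscribed client - no polling.
    for (const onChange of this.subscribers) onChange([...this.docs]);
  }
}

const messages = new LiveCollection();

const clientView = []; // what one "browser" currently sees
messages.subscribe(docs => {
  clientView.length = 0;
  clientView.push(...docs);
});

// A server-side change appears in the client's local view immediately,
// because the server pushed it rather than waiting to be asked.
messages.insert({ text: 'hello' });
console.log(clientView);
```

The real thing differs in every detail (DDP sends granular added/changed/removed messages over SockJS, not whole snapshots), but the shape of the conversation - subscribe once, then receive pushes - is the point.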

Greg Nitrous wrote:Hello from team Nitrous!


Hey Greg. Thanks for joining the discussion.

I am a HUGE fan of Nitrous.IO and want to strongly encourage anyone reading this to go check it out.

All the exercises in Mastering Git [Video] were developed, tested, and executed using Nitrous.IO, and you can register for their FREE plan to complete the entire course without installing ANYTHING on your local computer.

Hi Caroline. Thanks for posting the question.

There is no simple answer to your broad question and this forum does not provide sufficient space to offer a detailed reply.
I can, however, offer a few excellent resources:

- Code School has a great little module to get you started
- The Pro Git book is in its Second Edition and has become the de facto BIBLE
- I am obliged to point out that Packt Publishing has some excellent resources on the subject

And, in the vein of "Shameless Self-Promotion", there is an e-book I created for Learnable (I think it is free) that was intended to introduce ANYONE (even non-developers) to using Git.

I strongly encourage you to take the time to understand Git. It may seem a bit strange and archaic at first. Actually, it IS a bit strange and archaic. But there is an amazing wealth of information about Git (search YouTube for presentations by the training team at GitHub) and plenty of Open Source projects for you to practice on!

Best of luck in your pursuit of a Computer Science degree.

Thom

One of the fundamental differences between SVN and Git is that Subversion is Centralized. That means there must be a centralized master authority with respect to the codebase.
As you are seeing while you work on Open Source projects, a Distributed SCM - like Git - does not necessarily require one centralized repository. Each user has a complete copy of the entire history and can work independently. Changes can later be merged into those of another developer (GitHub uses the Pull Request as a means to 'request' that your changes be included, but gives the maintainer ultimate purview over the project). This can even happen from one machine to another! Changes (in the form of patches) can be sent via email to be merged into the copy of the Git repository held by another developer.
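Here is a runnable sketch of that machine-to-machine exchange, using two local directories to stand in for two developers' computers. The names, e-mail addresses, and paths are all invented; run it in an empty scratch directory.

```shell
set -e
# "Alice" creates a repository with one commit.
git init -q alice
git -C alice -c user.name=Alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "initial"

# "Bob" takes a COMPLETE copy - history and all - with no server in between.
git clone -q alice bob

# Alice keeps working independently...
echo "a change" > alice/notes.txt
git -C alice add notes.txt
git -C alice -c user.name=Alice -c user.email=alice@example.com \
    commit -q -m "add notes"
# ...and exports her new commit as an e-mailable patch file.
git -C alice format-patch -1 -o ../outgoing

# Bob applies the patch to his own copy; authorship and message are preserved.
git -C bob -c user.name=Bob -c user.email=bob@example.com \
    am -q "$PWD"/outgoing/*.patch
git -C bob log --oneline
```

In real life the patch file would travel by email (that is how the Linux kernel still works), but nothing about the mechanism requires a central server.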

So, overall the philosophy is "I can work on MY code, protect MY code from others, share MY code as I desire" without requiring a connection to a master centralized data store. In practice, particularly in the Open Source world, there is a repository understood to be the MASTER, and Git supports that concept easily. This is what has made GitHub so large.
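That "understood MASTER" is nothing more than a remote everyone agrees to push to. A runnable sketch, using a local bare repository (hub.git - the name and all identities here are hypothetical) to stand in for something like GitHub:

```shell
set -e
# A bare repository plays the role of the agreed-upon central "master".
git init -q --bare hub.git

# An ordinary working clone, with one commit of its own.
git init -q work
git -C work -c user.name=Dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "first"

# Point the clone at the shared hub and publish to it -
# the same shape as "git push origin" against GitHub.
git -C work remote add origin ../hub.git
git -C work push -q origin HEAD:main
git -C hub.git symbolic-ref HEAD refs/heads/main  # set the hub's default branch

# Anyone else can now clone the hub and see the shared history.
git clone -q hub.git teammate
git -C teammate log --oneline
```

Nothing in Git itself blesses hub.git as special; it is the MASTER only because everyone treats it that way.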

I may not have answered your question directly - about Workflow and Use Cases - but I hope this provides a clearer perspective on how Git "thinks". There may be others here who can offer Case-History stories about particular workflow(s).

To me, the major difference between Git and SVN is philosophical. Whereas Subversion assigns a sequential revision number to each set of changed files on a central server, Git stores content-addressed snapshots of the entire project, with unchanged files simply shared between commits.
So the difference is less in what is stored than in how history is modeled. Because every commit points to its parent(s), Git can "reassemble" the series of changes that led to any particular state (a commit; a point in time) of the project, and that is what makes branching and merging so much easier.
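You can see both halves of that - cheap branches and deltas computed on demand - in a throwaway repository. A minimal sketch (file, branch, and identity names are all invented):

```shell
set -e
git init -q demo
git -C demo -c user.name=Thom -c user.email=thom@example.com \
    commit -q --allow-empty -m "base"
echo "one" > demo/f.txt
git -C demo add f.txt
git -C demo -c user.name=Thom -c user.email=thom@example.com \
    commit -q -m "add f"

# A branch is only a tiny pointer into existing history; no files are copied.
git -C demo branch experiment

# The delta between any two snapshots is reassembled on demand.
git -C demo diff --stat HEAD~1 HEAD
```

Compare that with SVN, where a branch is literally a copy of a directory tree in the repository.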

Additionally, I have a distaste for the way in which SVN leaves those 'tracking' files (hidden ".svn" directories) all over the filesystem for its internal housekeeping; Git keeps everything in a single ".git" directory at the project root.