
Gary W. Lucas

Ranch Hand
since Jun 25, 2014

Recent posts by Gary W. Lucas

Thanks for your reply.  I think your advice will be useful in my investigations.  I'm thinking about a mobile platform that starts out with some basic behaviors, but learns to optimize them over time.  For example, if I were implementing a walker (and that's just an example, I'm not that good), it would start off with some basic gaits pre-programmed into the system, but would gradually improve them based on experience negotiating its environment.   So that seems to fit the pattern you suggested with the Training and Inference areas.

I look forward to reading your book and, no doubt, significantly revising some of my ideas.

I have been looking into machine learning for small mobile robot applications.  Do you have recommendations on the best way to apply deep reinforcement learning techniques on more modest processors?

I see that you discuss a bi-pedal walker in your Appendix B.  I'm particularly looking forward to reading about that one.

Thanks.  And good luck with your book!

Good point on the determinant.  It's kind of a reminder that, if the three points define a valid triangle, they can also be used to construct the axes for a 3D coordinate system.

Incidentally, the area computation I posted earlier is algebraically equivalent to the determinant (with an appropriate assignment of variables to the vertices).
4 months ago

One way to check the correctness of the input vertices is to compute the area of the triangle:

   double h = (c.y - a.y) * (b.x - a.x) - (c.x - a.x) * (b.y - a.y);
   double area = h / 2;

This calculation will produce a positive area if the vertices are given in counter-clockwise order and a negative area if they are given in clockwise order.  But if the vertices are collinear (or nearly so), it will give a value very close to zero.  So I suggest something like:

      throw new IllegalArgumentException("Vertices are collinear or specify a degenerate triangle");

As noted in the above post, IllegalArgumentException is an appropriate exception.   Throwing an Error is seldom a good idea.
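Putting the area test and the exception together, a minimal sketch might look like this (the class name, method names, and the zero tolerance are my own choices, not from the posts above; the tolerance in particular should be scaled to your coordinate range):

```java
public class TriangleArea {

    // Signed area of triangle (a, b, c): positive for counter-clockwise
    // vertex order, negative for clockwise, near zero for degenerate input.
    static double signedArea(double ax, double ay,
                             double bx, double by,
                             double cx, double cy) {
        double h = (cy - ay) * (bx - ax) - (cx - ax) * (by - ay);
        return h / 2;
    }

    // Reject collinear or coincident vertices.  The 1e-9 tolerance is an
    // arbitrary choice for illustration only.
    static void checkTriangle(double ax, double ay,
                              double bx, double by,
                              double cx, double cy) {
        double area = signedArea(ax, ay, bx, by, cx, cy);
        if (Math.abs(area) < 1e-9) {
            throw new IllegalArgumentException(
                "Vertices are collinear or specify a degenerate triangle");
        }
    }
}
```
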

Also, a warning about something that might cause you trouble later on...  Your rotation code rotates the triangle about the origin.  For graphics purposes, this is fine if the coordinates of the triangle are close to the origin.  But if the triangle is not close to the origin, you could end up rotating it right out of your display area.  I have no idea what problem you're trying to solve here, but keep this in mind as a potential cause if you're debugging and get a result you don't understand.
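The usual fix is to rotate about a pivot point (the triangle's centroid, say) rather than about the origin: translate the pivot to the origin, rotate, then translate back.  A sketch of that pattern (the names here are mine):

```java
public class RotateAboutPoint {

    // Rotate the point (x, y) by theta radians about the pivot (px, py):
    // translate so the pivot sits at the origin, rotate, translate back.
    static double[] rotate(double x, double y,
                           double theta,
                           double px, double py) {
        double dx = x - px;
        double dy = y - py;
        double cos = Math.cos(theta);
        double sin = Math.sin(theta);
        return new double[] {
            px + dx * cos - dy * sin,
            py + dx * sin + dy * cos
        };
    }
}
```
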

4 months ago
I recently posted a new open-source software project that may be of interest to the
visitors at this forum, particularly those working in Java.

The Gridfour Project is an attempt to create and distribute software tools for
processing raster (grid) based data. Potential applications include geophysical,
scientific, and engineering data. Eventually, I hope the project will provide tools
for rendering, data compression, contouring, surface analysis, and so forth.

The first module written for the Gridfour project is a file-backed data store
that is intended to assist applications in building and managing collections
of raster data that are too large to be kept in memory. I also posted some B-Spline
interpolation utilities and am working on some rendering tools.

You can visit the Gridfour Project at
I've also got a Wiki page at

Thanks for your time and attention.  I wish you all the best of luck on
your own software endeavors.

5 months ago
I implemented a Huffman tree just a few weeks ago, so fresh off my own experience (which may be different from yours), may I suggest a good old-fashioned array?

Before I start, I will point out that for my application, I had a reasonably small symbol set (256 symbols... many implementations add a special symbol for "end-of-text").  If yours is much larger, my suggestion might be less effective.  Also, I am assuming you are thinking of a classic Huffman tree rather than an adaptive tree (such as Vitter's algorithm).

From the perspective of a tree, we recall that there are two kinds of nodes: branches and leaves.  I used the same class for both, though I certainly could have used two different classes.  The node class had the rough form:
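(The original code does not survive in this archive; a rough sketch of what such a combined branch/leaf class might look like, with field names of my own choosing:)

```java
class HuffmanNode {
    int symbol;          // byte value; meaningful only for a leaf
    int count;           // frequency for a leaf; sum of children for a branch
    HuffmanNode left;    // null for a leaf
    HuffmanNode right;   // null for a leaf
    HuffmanNode parent;  // null for the root
    int bit;             // 0 or 1: the code bit on the edge from the parent

    boolean isLeaf() {
        return left == null && right == null;
    }
}
```
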

I declared two arrays.  The first was an array of leaf-node objects, one for each symbol.  This array was used for counting and coding symbols.  The second was an array of the same objects (i.e., a shallow copy of the first).  This array was used for sorting leaves by their count and building the Huffman tree.

I looped through my input text symbol by symbol.  Since my text consisted of bytes, the byte value could be used directly as an array index.  Using this index to access the proper leaf node from the primary array, I incremented its count.  When I was finished counting symbols, I sorted the second array in ascending order of counts (again, it contained the same node objects as the primary; it just put them in a different order, and the primary array was unchanged).  I removed zero-count elements.  Then I applied Huffman's classic algorithm to build up the tree a pair of nodes at a time.  At this point, the second (sorted) array could be discarded.

To encode the text, I again made a pass through the symbols, using their byte value as an index into the master array of leaf nodes.  Once I had the leaf, I traversed up the tree from bottom to top.  Of course, that direction of traversal is just the opposite of the bit order for the Huffman code (which goes from root to leaf), so I needed a good way to reverse it.  This is where an interesting collection class comes into play: I used a Java Deque, treating it as a classic stack structure.  As I traversed up the tree (from child to parent), I pushed each node onto the stack.  Once I reached the root node, I simply popped the stack to get back the nodes in the proper order of encoding.

For decoding the Huffman message, I just used the tree structure directly.  It was very efficient.  Sometimes, rolling your own data structure is the best way to go.
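The whole scheme described above can be sketched as follows.  This is a simplified toy version under my own assumptions (a 256-symbol byte alphabet, at least two distinct symbols in the input, a bit string rather than packed bits); the class and method names are mine, not the original implementation:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;

public class HuffmanSketch {

    static class Node {
        int symbol, count, bit;
        Node left, right, parent;
        boolean isLeaf() { return left == null; }
    }

    final Node[] leaves = new Node[256]; // master array, indexed by byte value
    final Node root;

    HuffmanSketch(byte[] text) {
        // Count symbols, using the byte value as an index into the leaf array.
        for (int i = 0; i < 256; i++) {
            leaves[i] = new Node();
            leaves[i].symbol = i;
        }
        for (byte b : text) {
            leaves[b & 0xFF].count++;
        }
        // Collect the non-zero leaves and build the tree a pair of nodes at
        // a time, always merging the two lowest-count nodes.
        List<Node> work = new ArrayList<>();
        for (Node n : leaves) {
            if (n.count > 0) work.add(n);
        }
        while (work.size() > 1) {
            work.sort(Comparator.comparingInt(n -> n.count));
            Node a = work.remove(0);
            Node b = work.remove(0);
            Node branch = new Node();
            branch.left = a;   a.parent = branch; a.bit = 0;
            branch.right = b;  b.parent = branch; b.bit = 1;
            branch.count = a.count + b.count;
            work.add(branch);
        }
        root = work.get(0);
    }

    // Walk up from the leaf pushing nodes onto a Deque used as a stack,
    // then pop to emit the code bits in root-to-leaf order.
    String encode(byte[] text) {
        StringBuilder sb = new StringBuilder();
        Deque<Node> stack = new ArrayDeque<>();
        for (byte b : text) {
            for (Node n = leaves[b & 0xFF]; n.parent != null; n = n.parent) {
                stack.push(n);
            }
            while (!stack.isEmpty()) {
                sb.append(stack.pop().bit);
            }
        }
        return sb.toString();
    }

    // Decoding just follows the bits down the tree from the root.
    byte[] decode(String bits) {
        List<Byte> out = new ArrayList<>();
        Node n = root;
        for (int i = 0; i < bits.length(); i++) {
            n = bits.charAt(i) == '0' ? n.left : n.right;
            if (n.isLeaf()) {
                out.add((byte) n.symbol);
                n = root;
            }
        }
        byte[] result = new byte[out.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = out.get(i);
        }
        return result;
    }
}
```
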

Hope this helps.


10 months ago
First off, I'm really interested in seeing your book.  My own experience with graph theory is limited to the old-school stuff and I would very much like to learn more about how it relates to big data applications.

My question:  How do Apache Spark and Neo4j hold up in terms of performance and memory use compared to some of the non-Java/non-JVM APIs?

I ask because in the past I've found that graph-related applications require a lot of objects (edges, nodes, etc.) and that the overhead for Java object construction and memory use gets to be a problem.  Seeing how successful Apache Spark is, it seems likely that they've addressed the issue. I am curious as to what your experience has been.

Best of luck on your book.



Thank you again for the insights you provided about Maven.  You helped clarify a number of issues that had confused me.

1 year ago
Thanks Tim!

Recognizing that the Tinfour project has a relatively narrow user base, I tried to provide enough information to make it accessible to those developers who would be interested in using it.

When you refer to "pretty" documentation,  I take it you saw the project wiki pages?  Since Triangulated Irregular Networks (TINs) are kind of a specialized topic, I felt it would help potential developers if I provided background information on some of the theory and applications. So I've started posting wiki pages.  Some of them are better than others, but my favorite would have to be the one on Robin Sibson's Natural Neighbor Interpolation algorithm.  I think that Natural Neighbors is one of the most under-appreciated interpolation techniques out there.  Sibson died just a couple of years ago, so I have no idea what he would have thought of my work.  But I think that anything I can do to make the Natural Neighbors technique more available to potential applications is effort well spent.  
1 year ago

Thanks for the assessment and for taking the time to put together a Pull Request.  I will look at it this weekend.    I spent a lot of time looking at the documentation, but I came away with the feeling that there are fundamental Maven use patterns  that I am just not seeing.  So your updated POMs are especially welcome.  

You caught one issue in particular that concerns me. The core module is not supposed to have dependencies on anything except the standard Java API.  I'll have to figure out how the commons-math3 dependency  got in there.

1 year ago
Thanks again to the forum members who provided answers and useful advice to my original post.  My open-source project has now completed the initial transition to a Maven build environment.   I've also posted compiled Jar files on the Maven Central Repository.

Anyone who would like to see how it turned out may do so by looking at

I've got to say that there was quite a learning curve on this.  And I steadfastly hope nobody tries to use my project as an example when doing their own work...

If anyone spots something that needs fixing or improvement, I welcome your suggestions.


1 year ago

Thanks again.  I will follow your advice and add a comment to my pom.xml indicating the version of Maven I use.

I've tried to keep my project organization as simple and non-innovative as possible, so hopefully Maven versions will never be a problem.

1 year ago

Thanks again for the suggestion.  I've picked up a couple of important nuances from your examples.  For instance, I'm now using properties such as plugin.version throughout the tree.

A question.  Following an example in the Apache Maven "getting started" page, I used a Maven archetype to automatically generate an example pom.  When I did, it included a substantial number of entries under "pluginManagement".  The generated pom.xml includes the comment "lock down plugins versions to avoid using Maven defaults".  But it adds a lot of clutter to the pom, and, having looked at a bunch of pom.xml files in various projects, it seems to me that most developers leave it out.  I also see that you don't use it in your own "Evolution" pom.  Is it a good idea to include the plugin version specifications?  I'd prefer to leave them out because the extra specifications degrade the signal-to-noise ratio for a human trying to understand the structure of the pom... though if locking down versions is an important coding practice, that would matter more than readability.
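For reference, the generated entries look something like this (a minimal fragment; the plugin and version shown here are just examples, not the full generated list):

```xml
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
      </plugin>
      <!-- ...one such entry per lifecycle plugin... -->
    </plugins>
  </pluginManagement>
</build>
```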
1 year ago
Well, I wish I'd seen this discussion back when you originally posted it.  I regret that I couldn't reply sooner.

I encountered this problem in my own Delaunay Triangulation (triangle mesh) implementation, including the special case of a triangle with a fictitious vertex.

I think your best solution would be to simply implement your own contains method from scratch.  First off, none of the standard polygon classes are going to understand the idea of a fictitious vertex (for everyone else: triangles with fictitious vertices are often used in Delaunay Triangulations to handle the region outside the bounds of the triangle mesh).  Second, because the triangle has special properties (it's always a convex polygon, and it has a fixed number of sides), it's easy to code a custom contains() method.  For example, simply test the point against each edge to see whether it is on the left or right side of that edge.  If you know that the triangle is oriented counterclockwise, then an inside point must lie to the left of each edge.  If the triangle is clockwise, the point must be to the right.  And if you don't know the orientation, just recall that an inside point must be on the same side of every edge (always left or always right; a mix means the point is exterior).  In the case of your fictitious triangle test, you would perform the left/right-side test for the true edge and then try dropping the point down to the edge line to see if it lies within the segment.  Of course, there are complicating factors here, because an exterior point could lie in the pie-shaped wedge between two exterior border segments.
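The edge-side test for a counterclockwise triangle might be sketched like this (the names are mine; for robust behavior on points that lie almost exactly on an edge you would want an epsilon, which I've omitted for clarity):

```java
public class TriangleContains {

    // Cross-product term: positive when (px, py) lies to the left of the
    // directed edge from (ax, ay) to (bx, by), negative when to the right,
    // zero when the point is on the edge line.
    static double side(double ax, double ay,
                       double bx, double by,
                       double px, double py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    // For a counterclockwise triangle, an interior point lies to the left
    // of (or on) all three directed edges.
    static boolean contains(double ax, double ay,
                            double bx, double by,
                            double cx, double cy,
                            double px, double py) {
        return side(ax, ay, bx, by, px, py) >= 0
            && side(bx, by, cx, cy, px, py) >= 0
            && side(cx, cy, ax, ay, px, py) >= 0;
    }
}
```
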

If you look at the GeometricOperations class in the Tinfour Project you will find some code related to this computation.  In Tinfour, all the triangles are always oriented counterclockwise, so you would have to make adjustments for your own logic.
1 year ago

Thanks.  I'll take a look.

I figure that if I can find three or four different projects, they'll pretty much cover all the variations.

1 year ago