Tim Holloway

Saloon Keeper
since Jun 25, 2001
Tim likes: Android, Eclipse IDE, Tomcat Server, Redhat, Java, Linux
Long-time moderator for the Tomcat and JavaServer Faces forums. Designer and manager for the mousetech.com enterprise server farm, which runs VMs, a private cloud and a whole raft of Docker containers.
These days, doing a lot of IoT stuff with Arduinos and Raspberry Pi's.
Jacksonville, Florida USA
Cows and Likes
Cows: 150 received total (1 in the last 30 days); 26 given
Likes: 1879 received total (25 in the last 30 days); 141 given (4 in the last 30 days)

Recent posts by Tim Holloway

Personally, I don't like "how to" guides that are full of snapshots of IDE operations. Not all of us use the same IDE - I'm not even sure which IDE was being used here, in fact. And IDEs are too often a crutch that keeps you from understanding what you are actually doing and why.

JSON, XML and YAML are all text-format serializations of hierarchical data structures. Java can digest such serializations and create and link JavaBeans (POJOs) from them - provided you have suitable Java code. And since there are libraries for all of the above, you don't generally need to re-invent that part. What you end up with is known in computer-science terms as a "graph", which is to say a network of interconnected objects; it has nothing to do with bar charts or the like.

To gain maximum utility, you can often annotate JavaBean class definitions to assist in the translation process. And, since these beans are preferably POJOs, you can even apply more than one set of annotations to the bean definitions, which is useful if an app needs to be able to read or output multiple formats.
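For instance, with the Jackson library a couple of annotations are all it takes to map a JSON document onto a small graph of POJOs. This is just a sketch - the Customer/Order classes and the JSON field names are made up for illustration:

    // Minimal sketch using the Jackson library; class names and JSON fields
    // here are invented for illustration, not from any particular app.
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.util.List;

    public class CustomerDemo {

        // Plain POJO; the annotation maps a JSON field name onto a bean property.
        public static class Customer {
            @JsonProperty("customer_name")
            private String name;
            private List<Order> orders;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public List<Order> getOrders() { return orders; }
            public void setOrders(List<Order> orders) { this.orders = orders; }
        }

        public static class Order {
            private String id;
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
        }

        public static void main(String[] args) throws Exception {
            String json = "{\"customer_name\":\"Acme\",\"orders\":[{\"id\":\"A-1\"}]}";
            // The library builds and links the beans - the object graph -
            // without any hand-written parsing code.
            Customer c = new ObjectMapper().readValue(json, Customer.class);
            System.out.println(c.getName() + " has " + c.getOrders().size() + " order(s)");
        }
    }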
In answer to your first question, most of those properties appear to be there to tell Ant where NetBeans is and where its copy of Tomcat is. That's mostly useless outside of NetBeans. It looks like maybe NetBeans was expected to launch a test browser as well, but I'm not sure.

In answer to your second question, those Ant actions are to deploy the WAR that has been built to the Tomcat server. Maven has a "deploy" goal instead, so you don't need any of that, although the information regarding the server location, application context, and other variable data would have to be translated into POM equivalents for the deploy goal to use.
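One common place to put that server/context information in Maven is the Tomcat Maven plugin. A hypothetical POM fragment - the URL, server id and context path below are placeholders, not values from your Ant build:

    <!-- Hypothetical fragment using the tomcat7-maven-plugin -->
    <plugin>
      <groupId>org.apache.tomcat.maven</groupId>
      <artifactId>tomcat7-maven-plugin</artifactId>
      <version>2.2</version>
      <configuration>
        <url>http://localhost:8080/manager/text</url>
        <server>local-tomcat</server>  <!-- credentials live in settings.xml -->
        <path>/myapp</path>
      </configuration>
    </plugin>

With something like that in place, "mvn tomcat7:redeploy" pushes the freshly built WAR to the running Tomcat via its manager application.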

As an editorial opinion, I've never been big on making Ant or Maven deploy my webapps. For development and testing I have Tomcat set to use the WAR as it was built (within the source project) instead of exporting it to somewhere else. And for production, I build a stand-alone deployable that generally has to be handed over to the operations staff for security reasons. Or, if Jenkins is in control, I let Jenkins handle that part.
3 hours ago
The first thing to consider is whether your RAM requirements include running a database or whether you're using a database on an external server. AWS does provide ready-to-use database server images. If you're running a database on the same machine as the web service, the RAM requirements are going to be much higher, since most databases eat a lot of RAM themselves.

As far as determining the RAM requirements of your Spring Boot app goes, you can simply launch it and use your operating system's utilities to find out how much RAM it is using. The "top" command works for Linux and other Unix-style OSes. In Windows, the Task Manager will tell you.

Then realize that the JVM launches with certain pre-set memory allocations and that they may need to be adjusted. To see how much RAM is being used within the JVM itself, use a good Java performance-monitoring tool.
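For example, the heap ceiling is set with -Xmx at launch, and from inside the app you can get a rough picture with the Runtime API. A quick sketch, not a substitute for a real profiler:

    // Quick sketch: report the JVM's own view of its heap usage.
    // Launch with explicit limits if needed, e.g.: java -Xms256m -Xmx512m -jar app.jar
    public class MemoryReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.println("Max heap  (MB): " + rt.maxMemory() / mb);
            System.out.println("Allocated (MB): " + rt.totalMemory() / mb);
            System.out.println("Used      (MB): " + (rt.totalMemory() - rt.freeMemory()) / mb);
        }
    }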
3 hours ago
This is essentially a problem in graph theory. Each person is a node and the infections from person to person form a graph. The extent of the graph at a given point in time is a function of the average number of connections between persons and the contagion rate (R0), which for COVID-19 is estimated at about 2 to 2.5.

In the early stages of spread, everyone is non-immune, so the maximum rate is seen. As the contagion spreads, some people will already have been exposed, so they won't count - you can only count them once (this assumes that people become immune after exposure).

Eventually, virtually everyone who can be exposed has already been exposed and the virus cannot expand further. This is the "herd immunity" effect that Sweden was hoping for. They didn't get it, incidentally. So you have geometric expansion at first, then a damping effect as the disease becomes widespread. This can be represented with a relatively simple formula. It won't be strictly accurate day-to-day because the spread is a statistical process, but it will be fairly accurate overall, given the right data.
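For illustration, a logistic curve captures that "geometric at first, then damped" shape. A sketch with made-up numbers, not fitted epidemiological data:

    // Logistic-growth sketch: I(t) = K / (1 + ((K - I0) / I0) * e^(-r*t))
    // K  = ceiling on how many people can be infected ("herd immunity" limit)
    // I0 = initial infections, r = per-day growth rate derived from R0.
    // All values below are illustrative only.
    public class LogisticSpread {
        public static void main(String[] args) {
            double K = 1_000_000;
            double I0 = 100;
            double r = 0.2;
            for (int day = 0; day <= 120; day += 10) {
                double infected = K / (1 + ((K - I0) / I0) * Math.exp(-r * day));
                System.out.printf("day %3d: %,.0f infected%n", day, infected);
            }
        }
    }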

Machine Learning, on the other hand, depends only on what it has seen. As I said, it cannot look ahead except as conditioned by past experience. So to an untutored machine, the infection plot would increase infinitely and not damp or show herd immunity. And unless you're training for overall timelines (multiple plagues) instead of a specific plague's future trends, the later data from when damping kicks in will distort the earlier projections.
In an open-ended environment with no alterations, you can do this sort of prediction - it's basically a curve-fitting operation. But using machine learning for that is overkill, since it's just as easy to use straight mathematics.

What machine learning cannot do is realize ahead of time when saturation occurs and what effect that will have on future growth. Because for most of us, learning is something based on the past, not on the unseen future.

The other thing that machine learning cannot do is anticipate changes in growth rate due to people adapting or to government policies. Or reactions to government policies.

A more practical approach would be to use ML to correlate the presence or absence (and severity) of lockdown orders against the natural growth rate. That, in fact, is the sort of tool that can be used effectively to guide policies. At least in places where knowledge is valued more than reflex political response.
Certainly there is a simplicity about deploying containers. However, it's not that difficult to create custom VM images using cloud-init, Packer, Vagrant and Ansible.
Containers are lightweight virtualization. They share many of their resources with their hosts, and in many cases you only have access to a resource if you configure the container accordingly - for example, to mount data that's external to the container or to gain access to network ports.
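With Docker, for example, you have to opt in to those resources explicitly. The image name, port and paths here are placeholders:

    # Publish a port and mount a host directory into the container; nothing
    # outside the container is visible unless you ask for it.
    docker run -d --name myapp \
        -p 8080:8080 \
        -v /srv/myapp/data:/var/lib/myapp \
        myapp:latest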

VMs, on the other hand, emulate virtually (sorry, no pun intended) all of the characteristics that a physical machine would have.

I run VMs for my master databases because they provide a central repository for all critical data (which makes it easier to back up, recover, tune, and maintain). And because I can let them own entire disk drives if I like. I also have dedicated VMs for certain systems that don't easily containerize. In many cases, however, the choice between a VM and a container is simply preference, and I've been migrating apps from VMs into containers for quite a while now.

I suppose my general rule is that if I need something that is complex and/or runs a heavy workload and/or is a central subsystem, I'll put it in a VM. If it's lightweight, needs to be elastic, and is pretty much self-contained, it's a candidate for containerization.
I think that there's a custom report designer/writer in the Pentaho suite from Hitachi. It's a lot like Crystal Reports to me, and since it uses XML as its report templating language, I can even do batch edits on it.

However, the real question is "what do you mean by 'greater'?" What do you need that you're not getting from the products that you've worked with so far?
1 day ago
This is a duplicate thread. It has been locked to eliminate confusion.
2 days ago
As I said, Tomcat doesn't do proxying. A VirtualHost simply defines an alternate domain name and set of webapps. The actual web applications for that domain, however, would have to be within the Tomcat server that defined the Virtual Host.

That's different from Apache HTTPD where an Apache Virtual Host can define a tunnel to reverse proxy to other servers (including one or more Tomcats).
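To illustrate the difference - hostnames, paths and ports below are placeholders. A Tomcat virtual host just points at another appBase inside the same Tomcat:

    <!-- Tomcat server.xml: the webapps for this host live inside this Tomcat -->
    <Host name="www.example.com" appBase="example-webapps"
          unpackWARs="true" autoDeploy="true"/>

Whereas an Apache HTTPD virtual host can reverse-proxy to a different server entirely:

    # Apache HTTPD (requires mod_proxy / mod_proxy_http): the virtual host
    # tunnels requests to a back-end Tomcat on another port or machine.
    <VirtualHost *:80>
        ServerName www.example.com
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>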
2 days ago