Tim Holloway

Bartender
since Jun 25, 2001
Tim likes: Android, Eclipse IDE, Java, Linux, Redhat, Tomcat Server
Forum Moderator
Long-time moderator for the Tomcat and JavaServer Faces forums. Designer and manager for the mousetech.com enterprise server farm, which runs VMs, a private cloud and a whole raft of Docker containers.
These days, doing a lot of IoT stuff with Arduinos and Raspberry Pi's.
Jacksonville, Florida USA

Recent posts by Tim Holloway

Please don't shout!

What do you not understand? Java Exceptions? They're covered in any decent textbook on Java. IllegalArgumentException? It's explained in the JavaDocs (https://docs.oracle.com/javase/7/docs/api/java/lang/IllegalArgumentException.html). And Carey's advice is worth noting. If you catch the exception and do a printStackTrace on it, it will provide a lot more information. For that matter, if you don't catch it at all, the app will exit and print a stack trace.
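A minimal sketch of both paths, catching versus not catching (the setAge method and its rule are made up for illustration):

```java
public class ExceptionDemo {
    // Hypothetical validation method, purely for illustration.
    static void setAge(int age) {
        if (age < 0) {
            throw new IllegalArgumentException("age must be non-negative: " + age);
        }
    }

    public static void main(String[] args) {
        try {
            setAge(-5);
        } catch (IllegalArgumentException e) {
            // Prints the exception class, the message, and the full call chain.
            e.printStackTrace();
        }
        // Had we not caught it, the JVM would print the same stack trace
        // and terminate the application.
    }
}
```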
4 hours ago
Welcome to the Ranch! And thanks for trying to use the code tags. But they need to wrap around the stuff you want to stay properly formatted!

Incidentally, you can also use code tags for SQL, XML, JavaScript, shell scripts and virtually anything else that you don't want our message editor to mangle.
4 hours ago
Welcome to the Ranch, Bastien!

I don't really understand that.

J2EE defines a very secure login process. In fact, as far as I know, it has never been subverted. On the other hand, I've spent a LONG time working with Tomcat, including some military and banking systems, and one thing I've seen is that over 90% of the home-grown login systems could be subverted by non-technical people in 15 minutes or less. So it makes sense to use Tomcat's implementation of the J2EE standard. Plus, you also get access to the J2EE user identity and authorization API functions.

Basic authentication isn't considered as secure as form-based authentication, but the Tomcat security system doesn't care which one you choose - it's separate from the authenticator, which is implemented in a Tomcat plugin known as a Realm. The Realm API is very simple. Its two most essential methods are the authenticate method, which accepts two arguments (the userid and password from the login dialog or form) and returns a true/false response (or throws an Exception), and a second method that creates an instance of a UserPrincipal object, which the Realm, the Tomcat server, and even (if you're careful) the application program can use for realm-specific storage for that user. That way, password data doesn't leak into Tomcat or the webapps, and would-be exploiters don't get handed free assistance.

Because the authenticate method is a simple yes/no interface, the Realm can be constructed to use a variety of backing services. There are Realm modules for XML files, JDBC databases, LDAP/Active Directory, Kerberos and more. And if you don't like the standard ones and need to, for example, authenticate against a custom web service, you can easily create your own Realm.

Unless you have some sort of really special need, I'd recommend using or creating a Realm over replacing the BasicAuthenticator class.
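The real Realm API lives in Tomcat's org.apache.catalina.realm package and carries more baggage than this, but the yes/no idea can be sketched in plain Java (the interface and class names below are made up for illustration, not the actual Tomcat types):

```java
import java.util.Map;

// Simplified stand-in for the Realm idea; not the real Tomcat interface.
interface SimpleRealm {
    // Yes/no answer only: no password data leaks back to the caller.
    boolean authenticate(String userid, String password);
}

// A toy realm backed by an in-memory map, standing in for the XML,
// JDBC, LDAP, etc. backing stores mentioned above.
class MapRealm implements SimpleRealm {
    private final Map<String, String> credentials;

    MapRealm(Map<String, String> credentials) {
        this.credentials = credentials;
    }

    @Override
    public boolean authenticate(String userid, String password) {
        String expected = credentials.get(userid);
        return expected != null && expected.equals(password);
    }
}
```

A JDBC- or LDAP-backed realm would implement the same method against its own backing store; the caller never sees the stored password.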
11 hours ago
I'm afraid you'll have to do your own work. I charge money for programming.

What I would do is sort the regexes by the first digit of the credit card number and arrange them into groups. I think each of your regexes has a single, unique first-digit value.

Once I have them grouped, I could then code a switch statement based on that first digit and apply only the group that matched the first digit of the card number being tested. I would then sequentially try each pattern in the group until either I had a match or I ran out of patterns to test (no match).
13 hours ago
Welcome to the Ranch, Subash!

The message is pretty much what it says. The server did not return a response within the time allowed. That can happen if the process handling the request ran for too long, in which case the webapp needs optimizing.

You can also timeout due to a slow network - and a network is only as fast as its slowest segment.

And finally, you can timeout for the same reason I have had timeout problems recently. The network wasn't properly configured and the packets couldn't route freely in both directions. Took me forever to fix that.
13 hours ago
ftp/sftp require a userid/password to log into the server. An IP address/hostname is not sufficient.

The ftp host account (userid) is a real Linux user with a real home directory (usually in a chroot jail). Since this directory normally belongs to the userid that you logged in as, you would also normally be able to read and write to it.

As far as using the Windows File Explorer to do all that, I don't know. Normally I do my file copies with rsync or the scp command that's provided by the authors of the Windows PuTTY program suite. Actual sftp is something I haven't done in years with any OS.
13 hours ago
Much clearer.

Ordinarily, I'd recommend the OS native package installer, which is probably brew for MacOS, but if your intent is to run multiple servers on multiple versions, Stack Builder should be more flexible, if less targeted to the specific OS.

I'm assuming that you're basically trying to migrate from 2 earlier databases to a newer version. In which case, I'd generally deal with one at a time, starting with the oldest. And in an era of terabyte disks, I probably wouldn't actually go deleting/uninstalling until a given database had been fully migrated.

You're looking at 2 different types of backups. The "backup data" backup is a full backup. The "backup table structure" is a schema backup. There's a pg_dump option for that.

You should definitely backup to SQL, and preferably uncompressed SQL. It's not a bad idea to make an offline copy of the SQL just in case. PostgreSQL has had an infuriating history of major incompatibilities in their internal binary files even between minor releases, so using a regular brute-force disk backup utility isn't as safe as SQL. Their SQL is, for the most part, going to be backwards compatible, and indeed, I think there may even be an option to make it more compatible for restoring to non-PostgreSQL DBMS databases.
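A sketch of the pg_dump options in question, assuming a database named mydb (a made-up name; adjust to taste):

```shell
# Full backup as plain (uncompressed) SQL -- the portable, safe format.
pg_dump --format=plain --file=mydb_full.sql mydb

# Schema-only backup: table structure, no data.
pg_dump --schema-only --file=mydb_schema.sql mydb

# Data-only backup, should you ever need the complement.
pg_dump --data-only --file=mydb_data.sql mydb
```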

I'd do the backups first, and only start deleting servers if resources get critical. Then as each restore checks off, you can uninstall the old server. For extra paranoia, as I said, make offline copies of the backups and for good measure make note of which backups went with which server.

PostgreSQL supports named schemas. Normally a database is created with the schema name "public", but if you need to host 2 different versions of the same database at the same time - for example, to merge stuff between different generations of the database - then you can restore the backup under another schema name. You then have to qualify which schema name you're working with, but that can make things a lot simpler than messing around with the backup SQL.
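A sketch of that schema trick, with made-up database, schema and table names:

```shell
# Park the current contents under a different schema name.
psql -d mydb -c 'ALTER SCHEMA public RENAME TO old_version;'

# Recreate an empty public schema for the incoming restore.
psql -d mydb -c 'CREATE SCHEMA public;'

# Restore a plain-SQL backup; its objects land in the fresh public schema.
psql -d mydb -f mydb_backup.sql

# From now on, qualify which generation you mean:
psql -d mydb -c 'SELECT count(*) FROM old_version.customers;'
```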

And, of course, if you are going to do stuff like that, be sure and take backups periodically as you work through things so that if disaster strikes you won't have to start from the beginning again!

OK. You've confused me. Why should line 11 ever be executed before line 4?

Yes, modern compilers can do a lot of optimising, and that includes moving code around. But the line is drawn at the source level: regardless of what happens internally, the observable execution order of the statements is not allowed to be altered.

Furthermore, there are limits even within statements. Otherwise the "short circuit" operators couldn't work reliably.
Welcome to the Ranch, Nesamani!

You really don't want to combine all those regexes into one. Regexes are a mess individually. Aggregated, you are expanding the mess exponentially.

Besides, you have the opportunity with your current collection to both validate and determine what bank the card comes from. That's something you'll find hard to do with a composite regex.

I doubt you're going to see major performance gains from a composite regex versus a set of individual regexes, if that's what you're looking for. If you have compiled the regexes (and you should!), then each compiled regex has been translated into an extremely efficient program run by the regular-expression finite state machine. It can rapidly scan for a match and will terminate immediately as soon as a character is encountered that does not conform.

So a very efficient algorithm could simply apply each regex to the object string and return (with the bank type) the first time a match occurs. You can further refine this by arranging the tests in a switch statement where each "case" clause handles a different first digit (one level is enough - the regexes are, as I said, very efficient, so we're just trying to reduce the number of regexes needed to get an answer). And, incidentally, I'd recommend using an enumerated value for the bank type rather than a literal string. But that's just tweaking.
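A sketch of that algorithm, with simplified patterns standing in for your real regexes and a made-up enum for the bank type:

```java
import java.util.regex.Pattern;

public class CardMatcher {
    // Hypothetical issuers, purely for illustration.
    enum BankType { VISA, MASTERCARD, AMEX, UNKNOWN }

    // Compile once; the compiled form runs on the regex engine's finite
    // state machine and fails fast on the first non-conforming character.
    private static final Pattern VISA = Pattern.compile("4\\d{12}(\\d{3})?");
    private static final Pattern MASTERCARD = Pattern.compile("5[1-5]\\d{14}");
    private static final Pattern AMEX = Pattern.compile("3[47]\\d{13}");

    static BankType classify(String number) {
        if (number == null || number.isEmpty()) {
            return BankType.UNKNOWN;
        }
        // One level of switching on the first digit is enough.
        switch (number.charAt(0)) {
            case '4':
                return VISA.matcher(number).matches() ? BankType.VISA : BankType.UNKNOWN;
            case '5':
                return MASTERCARD.matcher(number).matches() ? BankType.MASTERCARD : BankType.UNKNOWN;
            case '3':
                return AMEX.matcher(number).matches() ? BankType.AMEX : BankType.UNKNOWN;
            default:
                return BankType.UNKNOWN;
        }
    }
}
```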
14 hours ago
When an object is being constructed, it's still in the hands of the JVM's object factory and so the regular application threads can't access it.

Once constructed, however, the object is cast out into threadspace. At that point, if you want to be thread-safe, you have to code for thread safety.

If a constructor invokes a setter from within the constructor method, then the setter code is thread safe since the constructor is operating outside the application thread system. But that same setter will not be thread-safe during normal operation.

And as a side note, since a constructor can access other objects that are already participating in the application threadspace, be aware that those objects could potentially do horrible things if, for example, the incomplete object is passed to them as a parameter. Then again, they could do horrible things to the object under construction even without threading.
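A small sketch of that distinction (class and method names are made up):

```java
public class Temperature {
    private double celsius;

    public Temperature(double celsius) {
        // Safe here: no other thread can see this object yet, so the
        // unsynchronized setter cannot race.
        setCelsius(celsius);
    }

    // NOT thread-safe once the object has been published to other
    // threads; at that point callers need external synchronization.
    public void setCelsius(double celsius) {
        this.celsius = celsius;
    }

    // A post-construction mutator that is safe in threadspace.
    public synchronized void setCelsiusSafely(double celsius) {
        this.celsius = celsius;
    }

    public synchronized double getCelsius() {
        return celsius;
    }
}
```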
Yep, that's a mess alright.

I'm presuming that you're running either Linux or MacOS. In which case, PostgreSQL is normally installed via the distro's package manager. There are several packages required for postgresql, depending on whether you want to be a postgres client, a postgres server, or both. And there are probably some common-code packages that are required regardless of whether you're a client or a server, but those usually get pulled in as dependencies.

So, for example, I have a machine set up as a client running CentOS 7. The packages that yum installed were:

postgresql-libs-9.2.24-1.el7_5.x86_64
postgresql-9.2.24-1.el7_5.x86_64

This is, of course, postgresql 9.2.

Under ordinary circumstances, you cannot have multiple versions of postgresql installed on one machine. Most people don't need to, and thus the package structure won't allow it. In any event, since the postgresql server is a network server, and one and ONLY one application can listen on port 5432, you cannot run multiple postgresql servers at one time without a lot of extra trouble, and that's beyond the level that I think you're at.

In fact, if you have been doing something awful like downloading and building PostgreSQL from source code (which, alas, is the first thing that a lot of popular Unix-style products tell you to do), then I recommend you dump all that and use apt, yum, dnf or whatever to install the official distro version of the PostgreSQL server for that machine.

To use the psql utility and also pg_dump, pg_restore and so forth, you may need to install a postgres-client package. Also, the postgresql JDBC driver often has a separate package of its own.

The PostgreSQL server (once known as the "postmaster") is expected to run as a daemon, so for modern Linux distros, you'd install the postgresql-server package and activate it using the following commands (as root):

systemctl enable postgresql
systemctl start postgresql
That will ensure that the server starts and will automatically restart when you reboot.

Note that the names of services and packages may vary if you are using some other distro such as Ubuntu, but the basic concepts remain the same.

I'm not sure what you used to get the browser that shows all those servers. I mostly just run everything from the command line. Your final screen shot does seem to show a successful backup, though.

The directory structure is really weird, though. MacOS?

paul wheaton wrote:
How do you figure?

Because you want offline reading but not a download.
You seem to want to have your cake and eat it too. You want offline reading, but not a ".mobi" (obsolete) or PDF file. What difference does it make what the file format is, if it's something that nobody can read without custom software installed on their computer/tablet/phone?

About the only way I can think of to keep content on a server from being pulled down locally is to embed it in a video - which is a pretty awful way to read documents. That's because the video content providers have strong-armed the client (browser) suppliers into making their decoders into integral components of the client apps.

The Kindle and Nook apps have decryption abilities, but they do download. Although in the case of the Nook, even "DRM-free" files are DRM'd by the simple virtue of hiding them in a place where only rooting the OS can make them visible. Which is probably why there's no Nook For Linux.

Then again, that level of greed has made me stop buying books for Nook, DRM or no. If Packt or O'Reilly is willing to sell me a book and advertise it as freely transferable (subject to copyright), then bn.com shouldn't have the right to cancel that right unilaterally.

As I said elsewhere, the more you try to prevent theft, the more you attract thieves and repel honest customers and there's statistical evidence to back that up, but I understand that some people simply can't abide the thought of even a single copy of their product being pirated, so here's the best I can offer:

The DRM process used by Kindle, Nook, Overdrive, Adobe Digital Editions, et al. is not a one-time thing. The encryption is done on a per-client basis. And by per-client, I mean that if you have 6 Kindles and download to each, you'll end up with 6 different files, because the identity of the client device is used as part of the input to the decryption process. Because of this, you can't do simple hosting with DRM; you have to use a host that applies DRM encryption as part of the service.

Basically, that leaves you with Amazon, Barnes & Noble, or Kobo (or maybe some lesser-known players). All of whom would probably be glad to provide those services - if you contract with them as a publisher.

You could, of course, set up your own encrypting host. But then you'd have to supply your own reader app. And having multiple reader apps is about as appealing to me as the current trend of content providers to yank their property off Netflix and nickel-and-dime me with 110 paid service providers. That was a lot of what people hated about cable TV.
Do yourself a favor and don't mess directly with the postgresql files. Most of them aren't intended for human interaction; some can be edited for legacy reasons, but they are better manipulated via postgres itself. If you do need to do something special, the documentation will tell you.

The createdb program may have a different name depending on which OS distro you are using. I can use it on CentOS, but the problem with a generic command name like that is that on a lot of systems, when you say "createdb", the question is: what type of database? I mean, it could be a MySQL or SQLite database. So sometimes distro providers alter the names of utility programs to eliminate ambiguity.

In any event, you can't pg_dump a database that doesn't exist.

But a better solution than createdb is to use the psql command utility and issue the "CREATE DATABASE pg222;" DDL command. That works for most databases, and I just verified that PostgreSQL is included. In fact:

createdb man page wrote:
      createdb is a wrapper around the SQL command CREATE DATABASE (CREATE_DATABASE(7)). There is no effective difference between creating databases via this utility and via other methods for accessing the server.

Note, however, that PostgreSQL is very paranoid about security. So if the user that attempts to create the database doesn't have database creation rights, the CREATE DATABASE command will be rejected. This can be avoided by running the psql command under the postgresql administrative userid - which is NOT the same thing as the OS administrative (root) userid. Usually the pg admin userid is something like 'postgres'.

If you do that, however, you'll need to grant rights to the userid who is supposed to actually own the database. The easiest way to do that is by appending "WITH OWNER = my_user_id" to the CREATE DATABASE command. Use the psql command "\h CREATE DATABASE" to see the full set of options.
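Putting the pieces together (the database name pg222 comes from the thread; my_user_id is a placeholder):

```shell
# Run psql under the PostgreSQL administrative userid, not the OS root.
sudo -u postgres psql -c "CREATE DATABASE pg222 WITH OWNER = my_user_id;"

# Inside psql, \h CREATE DATABASE shows the full option list.
```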