Some people, when well-known sources tell them that fire will burn them, don't put their hands in the fire.
Some people, being skeptical, will put their hands in the fire, get burned, and learn not to put their hands in the fire.
And some people, believing that they know better than well-known sources, will claim it's a lie, put their hands in the fire, and continue to scream it's a lie even as their hands burn down to charred stumps.
Tim Holloway wrote:I'm curious why this special mod exists specifically for Tomcat. RAM disks have been a thing for a very long time now. Indeed, all the major Linux distros employ a RAM disk as part of their secondary boot process; it loads the live kernel and its resources.
Then there's caching. Modern hard drives have hundreds of megabytes of on-board cache. OSes have lots of filesystem cache - probably a third or more of the physical RAM on my machines is cache.
Web systems have cache of various forms, such as SQUID and REDIS.
So it would be interesting if you could show us some use cases where all that hard work gained you something significant (other than Tomcat expertise, of course). Benchmarks would be especially useful.
Oh, and since this is Tomcat-specific, I'm adding this thread to the Tomcat forum where the people most likely to benefit hang out.
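For what it's worth, the kind of measurement being asked for doesn't have to be elaborate. The sketch below is my own illustration (not anything from the mod under discussion): it times a first read of a scratch file against an immediate re-read. On most systems the second number collapses, because the OS page cache answers it without touching the drive.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CacheTiming {
    // Time one full read of the file, in microseconds.
    static long timeRead(Path p) throws IOException {
        long start = System.nanoTime();
        Files.readAllBytes(p);
        return (System.nanoTime() - start) / 1_000;
    }

    public static void main(String[] args) throws IOException {
        // A ~10 MB scratch file stands in for a static resource;
        // a real benchmark would read files Tomcat actually serves.
        Path p = Files.createTempFile("cache-test", ".bin");
        Files.write(p, new byte[10 * 1024 * 1024]);

        long first = timeRead(p);   // may involve physical I/O
        long second = timeRead(p);  // typically served from the OS page cache
        System.out.println("first read:  " + first + " us");
        System.out.println("second read: " + second + " us");
        Files.delete(p);
    }
}
```

Numbers from a toy like this only show the cache effect exists; a real case for the mod would need measurements against Tomcat under load.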
Tim Holloway wrote:I don't know what you mean by "gb's disc (sic) cache". If you're referring to the physical cache on disk drives: they do, in fact, come with significant RAM on the controller board, allowing the device to complete write operations rapidly and to retrieve commonly-accessed data without having to wait on physical disk latency.
Squid is just one of many products that can cache static page responses in RAM when used as a proxy for Tomcat. There are also physical boxes that provide front-end caching, although unless you're a big IT shop, it's usually cheaper to use a software cache.
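For concreteness, a Squid reverse-proxy ("accelerator") setup in front of Tomcat can be as small as the squid.conf fragment below. The host name and ports are placeholders, and a real deployment would add ACLs and cache sizing on top:

```
# Listen on port 80 and accelerate a Tomcat instance on port 8080
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=tomcat
```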
Tim Holloway wrote:Some pusillanimous pedantry:
A "disk" is a hard drive or other fixed media.
A "disc" is a CD, DVD or other removable media.
It's not logical, but that's the way it is.
Just to repeat, though, a data request isn't going to get anywhere near a disk drive's read/write heads unless the data could not be found in either the OS buffer cache or the physical drive's on-board RAM.
The primary use of cache software is to fine-tune the process. For example, if a front-end cache utility keeps a popular JavaScript or CSS file as compressed data in RAM, the act of locating it in cache, expanding it, and transmitting it could well be much faster and more RAM-efficient than relying on the OS buffers. I was actually shown a case where that had been taken advantage of way back in the 1970s. And it wasn't even a very efficient compression scheme.
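A minimal sketch of the trick described above, keeping a popular asset gzip-compressed in RAM; the class and method names here are invented for illustration. For a client that sends Accept-Encoding: gzip, the stored bytes can go out as-is with Content-Encoding: gzip, skipping both disk I/O and per-request recompression:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.zip.GZIPOutputStream;

public class CompressedAssetCache {
    // Path -> gzipped bytes, held entirely in RAM.
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    // Compress a text asset once and store it under its request path.
    public void put(String path, String content) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(content.getBytes(StandardCharsets.UTF_8));
        }
        cache.put(path, bos.toByteArray());
    }

    // Return the gzipped bytes, ready to send with Content-Encoding: gzip.
    public byte[] get(String path) {
        return cache.get(path);
    }
}
```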
Tim Holloway wrote:I still don't understand why you're so adamant that the physical disk hardware is going to get involved. Modern OSes do actual physical disk reads only as a last resort. If the data was recently referenced, the OS will pull it straight out of physical RAM. Not virtual (disk-backed) memory, but actual physical RAM. No spin latency, no head activity, no grabbing the I/O buses. And that data is not discarded from RAM unless more recent requests crowd it out. My desktop is currently using over 1.5GB of buffer memory, and I haven't even tuned it.
The best place to check for expired headers in any event is on the client side, since a network request from the client generally has far more overhead and delay than a local disk request, to say nothing of eating into total network bandwidth. Regardless, protocol-aware caching proxies (like Squid) do pay attention to the headers. You can do a lot of fine-tuning on a product like Squid if you need to. I use Squid as an example since I run it to cache OS update packages. Small packages cache in RAM, larger ones on disk, but regardless, I only have to pull a package once from the (slow) Internet, and thereafter all the proxied servers can get their OS updates via Gigabit internal networking. That's forward proxying, but Squid also does reverse proxying.
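To make the header side concrete, here is a toy origin server that marks a stylesheet cacheable by shared proxies like Squid. It uses the JDK's built-in com.sun.net.httpserver purely for illustration, not Tomcat; in a servlet you would set the same header on the HttpServletResponse:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CacheHeaderDemo {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/static", exchange -> {
            byte[] body = "body { color: black; }".getBytes(StandardCharsets.UTF_8);
            // "public" lets shared proxies (e.g. Squid) keep a copy;
            // "max-age" tells both browser and proxy how long it stays fresh.
            exchange.getResponseHeaders().set("Cache-Control", "public, max-age=3600");
            exchange.getResponseHeaders().set("Content-Type", "text/css");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8000);
        System.out.println("serving http://localhost:8000/static");
    }
}
```

With "private" instead of "public", a well-behaved shared cache must not store the response; only the end client may.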
Not to shred your hard work - and I'll give you marks for digging into Tomcat/JEE - but to award major-league geek points, I'd want to see tangible benefits. That is, benchmarks and supporting documentation. The problem with what efficiency-gaining mechanisms are "supposed" to do is that, as I can personally attest - with support from other long-timers here - the inefficiencies in a system are almost never where you "know" they're going to be. Time and again we've seen that when actual measurements are made, the real bottlenecks are somewhere else.
Just one example: many years ago I was part of the OS support group for a large IBM mainframe. It started crashing every afternoon around 3 PM. If you have never worked with mainframes, understand that mainframe computers are never supposed to crash. You don't just "turn a mainframe off and back on again"; for one thing, system startup could take 15 minutes or more, and the entire company could come to a halt. In that particular case the cure wasn't a software change at all, just a single option switch that had to be set properly on one of the remote terminal systems.
Samuel Marchant wrote:
A server is not committed when it starts with the settings of the OS, as that being of reference to your "desktop and 1.5gb" that numeric is only relevant if you also give the total RAM size of the machine with it, I presume makes 1.5gb small and irrelevant.
Perhaps, but these days a 4GB Raspberry Pi runs $USD 75 and it's quite enough computer to run Tomcat. Meaning that a "desktop" could be dedicated solely to the database, if money is exceptionally tight. For myself, I recycle old Windows boxes.
Samuel Marchant wrote:
Some of the smaller 2gb and 4gb desktops will be just fine for a Tomcat in someone's small business but may need the database with it.
I don't know what that means, but if it relates to the memory parameters, that limit applies only to 32-bit versions of Java. Which you shouldn't be running if you have more than 4GB of RAM anyway.
Samuel Marchant wrote:
Tomcat itself only allows 1gb max RAM on one of its reserves on the command line the other two are smaller assignment again, so I don't see where it will get either RAM or bus and head time from for requests.
Here you've totally lost me.
Samuel Marchant wrote:
Servers don't check clients cache they send initial data and caching headers for the output information, as of proxy cache , in http 1.1 if the cache is set private the proxy will have no contact with the client data and it's controls, for a proxy to cache the data must be set public (http 1.1 cache -control header) as every government that is not USA or Russia stipulated for internet cafes.
Samuel Marchant wrote:
Keeping it out of read/write heads on critical software such as query driven sites utilise is another
Tim Holloway wrote:The frustrating part is the repeated blind assertion that the read/write heads are being used to the point where it's essential to provide a new solution to avoid that. We've seen no concrete evidence that, considering how much work goes into avoiding physical I/O by the OS itself, that the read/write heads are really being used at all.
How many times have we told people in the optimization forum not to optimize based on what you "know", but to optimize based on what you measure? That's all I've ever asked for: measurements. Because "common sense" is just a way for the lazy to avoid thinking, and it's frequently wrong.
Tim Moores wrote:Assertions are just that - assertions. Without corroborating evidence they're as likely to be false as to be true.
The part about the server API is irrelevant if there's no speedup. It doesn't fulfill a business need.
Samuel Marchant wrote:a proof that at present exists neither for or against anyhow and is extremely unlikely in any favour against my code.
Samuel Marchant wrote:Want a piece of proof, Java cannot sell sell because of the fault it has if you were right in your statements against JVM efficiency by the mechanism it allows in the process used in the code.
Tim Holloway wrote:
If the instructions on using it are as nonsensical as this, then nobody is going to be able to use it.
This isn't a phone's spelling corrector at work, these words are random gibberish and make no sense whatever.
Please find someone with better English skills to help you.
And forget about attempting to define reality by brute force of "it's true because I say it's true." Someone already has that job in this country.