
Caching data for speed of access across docker containers

 
Tony Evans
Ranch Hand
Posts: 680
In a more monolithic application, you can have a data cache for fast access to data, so three services within the application can read from and write to the same caches.

But suppose I break it down into three services. For a currency-exchange application I have, say, a client service that holds client funds; a second service that reads in and stores currencyExchanges for clients; and a third service that does the matching.

In a monolithic application I would have a cache of clientFunds and a cache of currencyExchanges. The data would be persisted to a database or to the cloud, but there would be no need to access the data store except at start-up.

But with the three microservices, it looks as though every time the matching service runs it will have to load the currencyExchanges and clientFunds from storage into a local cache for the lifetime of that container.

I have read about the idea of volumes. Volumes are for persisting data, but can they be used as a cache? Is reading and writing to a volume faster than reading and writing to another kind of storage?
 
Tim Holloway
Saloon Keeper
Posts: 24287
Docker Volumes are an abstraction that makes persistent storage more flexible. Although by default they simply map a directory within Docker's support directories to a Docker volume ID, you can alternatively map a Docker volume ID to an actual filesystem path or an alternative filesystem handler.

A currency exchange system would probably not contain enough data to make it worth caching on disk as opposed to keeping tables in memory in the exchange-rate container. You could use a Docker volume as long-term storage, but remember that the OS itself tends to cache frequently-used disk data in RAM transparently and automatically.

A container also has its own internal filesystem, so you could simply create a file entirely internal to the container instance. Note that in that case a restart of that container instance would preserve updates to the file, but a cold start (from the base image) would not, unless you took a snapshot of the instance and used that.
 
Tony Evans
Ranch Hand
Posts: 680
Hi

Thanks for the reply. I was using currency exchange as an example, but it could be anything that needs to be stored and searched. Take the following scenario: I have huge amounts of data coming in daily that I also need to search.
In the old monolith, I would store the data in a database, but to speed up searching I would also keep it in a searchable form in memory, in a cache such as a static HashMap of data.

In the monolith I have one service that writes the data to the search cache and then to the database, and another service that carries out operations on the search cache.
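The monolith's write-through pattern described above can be sketched in plain Java. This is a minimal illustration, not a production cache: the `BackingStore` interface is a hypothetical stand-in for the database or cloud-storage layer.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal write-through cache sketch: writes update the in-memory map
// first, then the backing store; reads are served from the map and
// fall back to the store on a miss (e.g. after a restart).
public class WriteThroughCache<K, V> {

    /** Hypothetical stand-in for the database or cloud storage layer. */
    public interface BackingStore<K, V> {
        void save(K key, V value);
        V load(K key);
    }

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final BackingStore<K, V> store;

    public WriteThroughCache(BackingStore<K, V> store) {
        this.store = store;
    }

    public void put(K key, V value) {
        cache.put(key, value);   // search cache updated first
        store.save(key, value);  // then persisted
    }

    public V get(K key) {
        // Cache miss: lazily reload the record from the backing store.
        return cache.computeIfAbsent(key, store::load);
    }
}
```

In the monolith both services share this one object; the question in this thread is what replaces that shared object once the writer and the searcher live in separate containers.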

Now let's say I am using Docker with Kafka and partitions: I can get faster writes to the DB or to cloud storage with multiple consumers running as containers.

Then I have another container that does the searching.

I am trying to work out the best architecture:

1) Every search loads data from the repository into a local in-memory cache.
2) Every time data is loaded into the repo, a trigger pushes the update to the search container, which keeps a local search cache.
3) A shared memory cache that the writing containers can write to and the search container can read from.
4) Or does the speed of getting data from the cloud make the idea of caches redundant?
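Option 2 above can be sketched without committing to any particular messaging library: the writing service publishes each update to registered listeners, and the search service keeps its local cache current. In a real system the notification would travel over Kafka or similar; the interfaces and names here are purely illustrative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of option 2: each write is pushed to subscribers so the search
// service's local cache stays current without reloading from storage.
// In production the publish step would be a Kafka topic or similar;
// this in-process publisher is illustrative only.
public class UpdatePublisher<K, V> {

    public interface UpdateListener<K, V> {
        void onUpdate(K key, V value);
    }

    private final List<UpdateListener<K, V>> listeners = new CopyOnWriteArrayList<>();

    public void subscribe(UpdateListener<K, V> listener) {
        listeners.add(listener);
    }

    /** Called by the writing service after persisting a record. */
    public void publish(K key, V value) {
        for (UpdateListener<K, V> listener : listeners) {
            listener.onUpdate(key, value);
        }
    }
}
```

The search service would subscribe with a listener that simply does `localCache.put(key, value)`, so its cache is updated as data arrives rather than reloaded per search.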

What I am trying to do is move from a monolithic way of thinking about architecture to a more microservice way of thinking.

Thanks



 
Tim Holloway
Saloon Keeper
Posts: 24287
One thing you might consider is using a dedicated cache server such as Redis. You'll find pre-built container images for tools like that. Just wire them into your container set!
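As an illustration of "wiring them in", a pre-built Redis image can be added to a Compose file in a few lines. The service and image names below are hypothetical, not from this thread:

```yaml
services:
  redis:
    image: redis:7            # official pre-built Redis image
    volumes:
      - redis-data:/data      # named volume so cached data can survive restarts
  search-service:
    image: my-search-service  # hypothetical application image
    environment:
      REDIS_HOST: redis       # containers reach each other by service name
    depends_on:
      - redis
volumes:
  redis-data:
```

On the Compose network, the application container connects to the cache at host `redis`, so the cache outlives any individual application container.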
 
Tony Evans
Ranch Hand
Posts: 680
I think this is what I was thinking of; this article creates a shared cache using Hazelcast:

https://hazelcast.com/blog/creating-hazelcast-cluster-docker-cloud/
 
Tony Evans
Ranch Hand
Posts: 680
Thanks, yes, Redis is another cache that Docker uses. Thanks very much. I was thinking it would be set up as a separate image, and Hazelcast and Redis both do that.
 
Tim Holloway
Saloon Keeper
Posts: 24287
It's not a "cache that Docker uses" any more than MySQL is "a database that Docker uses". In both cases, the servers in question are stand-alone Docker containers in their own right, designed so that other containers can make use of them: originally via Docker links, more recently via Docker private networking, or, for that matter, host-based networking. They can use Docker Volumes as needed for persistent storage, or the old-fashioned Docker mounts.

As far as I know, both docker/redis and docker/mysql were developed and are supported by third parties, and in any case they are not part of Docker itself.
 
Tony Evans
Ranch Hand
Posts: 680
I looked at Redis.

What it does is load into Docker as an image. I think I am getting to grips with how it works: third parties are now loaded as images for your own code to work with.
 
Tim Holloway
Saloon Keeper
Posts: 24287
Think of them as spools in a TinkerToy™ set. You wire them together with "sticks" (virtual LAN connections) to build a whole that is greater than its parts, using small, general-purpose parts in place of one big, ugly, inflexible part.

Aside from just generally simplifying installation and maintenance of complex systems, you also have better options if you need to go elastic.
 