Kirk James

since Mar 13, 2019

Recent posts by Kirk James

Hi all,

The following code shows a REST endpoint built with the Spring MVC module:

The @PutMapping annotation doesn't declare a consumes or produces attribute, which could be:

Are those attributes (consumes/produces) mandatory for some reason? Why should I use them, or not? If I make a call with a client (Postman), I can send a JSON request even though I don't provide the consumes/produces attributes for this endpoint.
Can you please explain this in more detail?
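To make the question concrete, here is a minimal sketch of the kind of endpoint I mean (ItemController and Item are made-up names; this assumes spring-web on the classpath):

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/items")
public class ItemController {

    // Without consumes/produces: Spring negotiates the format from the
    // request's Content-Type/Accept headers and the registered message
    // converters, which is why a JSON call from Postman still works.
    @PutMapping("/{id}")
    public ResponseEntity<Item> update(@PathVariable Long id,
                                       @RequestBody Item item) {
        return ResponseEntity.ok(item);
    }

    // With consumes/produces: the mapping is narrowed, and a request with a
    // different Content-Type is rejected with 415 Unsupported Media Type.
    @PutMapping(value = "/{id}/strict",
                consumes = "application/json",
                produces = "application/json")
    public ResponseEntity<Item> updateStrict(@PathVariable Long id,
                                             @RequestBody Item item) {
        return ResponseEntity.ok(item);
    }
}

// Made-up payload type for the sketch.
record Item(Long id, String name) {}
```

So, as far as I can tell, the attributes restrict the mapping rather than enable it, but I'd like confirmation.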

2 days ago

Tim Holloway wrote:You confused me. I call "DML" SQL. Since basically, DDL is the part of "SQL" that isn't actually SQL and SQL is the part that was defined by Codd and Date.

Anyway, as I recall, auto-commit is on by default with MySQL and PostgreSQL and probably Oracle, SQL Server and DB2.

Realistically, saving stuff up to do a commit is something more common to programming than for just fiddling around with the database manually, so I'd expect auto-commit to be the default in general.

But I don't think that it's actually mandated anywhere. For that matter, I don't think that there's a standard that says a DBMS even has to have a command line interface application. Although it's going to be a lot less fun tweaking things without one.

About DML: I usually define DML as Data Manipulation Language, i.e. everything related to manipulating the data in your database (INSERT, DELETE, UPDATE), and DDL as Data Definition Language, i.e. everything related to defining your data architecture (ALTER, DROP). When you say that:

DDL is the part of "SQL" that isn't actually SQL

sorry, but I got confused.

Thank you for your reply.

Tim Holloway wrote:Actually, while it might vary depending on which DBMS you're using, I've always seen DDL auto-commit. And actually, having to explicitly commit DDL sounds a bit problematic to me.

What about DML? Is there no auto-commit by default? I've never worked with the DBMS settings; that behaviour can probably be configured.

As far as I know, DDL commands such as ALTER TABLE or DROP TABLE are set to auto-commit by default on an RDBMS; on the other hand, DML needs an explicit commit operation.

Can someone confirm that assumption, or explain whether some RDBMSs behave differently?
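To make the question concrete, here is roughly the JDBC pattern I have in mind (table and column names are made up, and I'm assuming a Connection obtained elsewhere):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CommitDemo {

    // The JDBC spec puts new connections in auto-commit mode by default,
    // regardless of the RDBMS: each statement commits as soon as it completes.
    static void dmlInOneTransaction(Connection conn) throws SQLException {
        conn.setAutoCommit(false);          // take control of transaction boundaries
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO accounts(id, balance) VALUES (1, 100)");
            st.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
            conn.commit();                  // the DML becomes visible only here
        } catch (SQLException e) {
            conn.rollback();                // undo both statements on failure
            throw e;
        }
    }

    // Note: on Oracle and MySQL, a DDL statement such as "ALTER TABLE ..."
    // issues an implicit commit even inside a block like the one above;
    // PostgreSQL, by contrast, supports transactional (rollback-able) DDL.
}
```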


Paul Clapham wrote:Technical? I don't understand what you're asking.

I was just talking about the Spring IoC container, which manages the bean instances and cannot work with static members.
2 months ago

Paul Clapham wrote:

Kirk James wrote:Injecting static fields looks like a misuse of an OOP pattern (dependency injection); besides, static fields belong to a class, not to an object.

Those are both good reasons not to inject static fields, so I think you have the answer well-covered.

On the technical side, what could be the reason for that behaviour?
2 months ago
Hi all,

I was thinking about why Spring doesn't allow you to inject static fields. Injecting static fields looks like a misuse of an OOP pattern (dependency injection); besides, static fields belong to a class, not to an object.

Can someone explain technically why Spring doesn't allow (or isn't able) to autowire static fields?
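To illustrate what I mean by "fields belong to the class": here is a toy, naive reflection-based injector I sketched (this is not Spring's real code, just an illustration of the problem):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// A naive injector that walks a bean's fields. It shows why static fields
// don't fit per-instance injection: they live on the class, so writing one
// would silently affect every instance (and every context) at once.
public class ToyInjector {

    static class Repo {}

    static class Service {
        Repo instanceRepo;          // one value per Service object
        static Repo staticRepo;     // one value shared by the whole class
    }

    public static Service inject(Repo dependency) throws Exception {
        Service bean = new Service();
        for (Field f : Service.class.getDeclaredFields()) {
            if (f.getType() != Repo.class) continue;
            if (Modifier.isStatic(f.getModifiers())) {
                // A per-instance container skips this: the target is not
                // owned by the bean instance it is currently wiring up.
                continue;
            }
            f.setAccessible(true);
            f.set(bean, dependency);    // set the field on THIS instance only
        }
        return bean;
    }

    public static void main(String[] args) throws Exception {
        Service s = inject(new Repo());
        System.out.println("instance field injected: " + (s.instanceRepo != null));
        System.out.println("static field injected:   " + (Service.staticRepo != null));
    }
}
```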

2 months ago
Hi all,

I'm currently focusing on how web services in a microservices architecture should properly communicate with each other.

Thinking about a Java RESTful web application (i.e. Spring Boot / Spring Cloud components), can you please suggest a good way for one microservice to communicate with another? Is it bad practice if a microservice calls another microservice as if it were an external client?
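To be concrete, the kind of call I have in mind is something like this, using the plain JDK HttpClient (Java 11+); the host, port, and /users/{id} path are made-up placeholders, and in a Spring project one would typically use RestTemplate, WebClient, or a Feign client instead:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Service A calling service B's REST API over plain HTTP.
public class UserClient {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Builds the request to B; separated out so it can be inspected/tested
    // without a network call.
    public static HttpRequest userRequest(String baseUrl, long id) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/users/" + id))
                .timeout(Duration.ofSeconds(2))   // always bound remote calls
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    // Performs the synchronous call and returns the raw JSON body.
    public static String fetchUser(String baseUrl, long id) throws Exception {
        HttpResponse<String> resp =
                CLIENT.send(userRequest(baseUrl, id), HttpResponse.BodyHandlers.ofString());
        return resp.body();
    }
}
```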

2 months ago

Rob Spoor wrote:Consider you have a microservice that you use. There is one instance deployed, and you talk to that directly. After a while, your demand increases and a single instance just can't cut it, so you add a second instance. However, now you must change your application to support both instances, or put a load balancer (Apache, Nginx, etc) in front of it. You can do that, but once you need a third instance you need to go through this process again. (If you use a load balancer, you must update its configuration).

With service discovery, each instance registers itself. Your application is a client of the discovery service. When it needs to access the microservice, it asks the discovery server for an instance to talk to. You don't need to configure all the separate instances, only the discovery service. If a new instance is added, it registers itself again, and the discovery service will automatically make it available for you.

Thank you for your reply.

Going deeper, what role do a load balancer and a Zuul proxy server play in this kind of workflow/structure (which is basically the architecture I'm trying to understand better)?
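If I understood the register/lookup flow correctly, it is roughly like this toy sketch (obviously Eureka does much more: heartbeats, leases, replication; names and addresses here are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Instances register under a logical service name; clients ask for the name
// and get one instance back, round-robin, so new instances become reachable
// without any client reconfiguration.
public class ToyRegistry {

    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger();

    // Each instance calls this when it starts up.
    public void register(String service, String address) {
        instances.computeIfAbsent(service, k -> new CopyOnWriteArrayList<>())
                 .add(address);
    }

    // Clients call this instead of hard-coding an address.
    public String lookup(String service) {
        List<String> addrs = instances.get(service);
        if (addrs == null || addrs.isEmpty()) {
            throw new IllegalStateException("no instance of " + service);
        }
        return addrs.get(counter.getAndIncrement() % addrs.size()); // round-robin
    }
}
```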
4 months ago
Hi all, I'm approaching the study of the Spring Eureka Server in the context of a microservices architecture.

Basically, as far as I understand, the main purpose of Eureka is to discover the address of each registered microservice, while each microservice pings the Eureka server to notify it that it is available.

So, I was reading some articles about it, trying to understand why we need a service discovery in our architecture, and I faced some doubts about the service scaling process:

Nowadays, on a cloud platform, it is obvious that all the servers or containers use dynamic IPs for autoscaling. And the interesting thing is that in microservice architecture, the key principle is that your service can be autoscaled as per load, so cloud platforms are ideal for microservices. (...)
We can not predict the IP addresses of the container/server beforehand, so putting dependent services' IP addresses in the config file is not a solution. We need a more sophisticated technique to identify the service, and Eureka server steps in here.

Can someone explain these last two sentences to me, about the autoscaling process and its relationship with a cloud platform?


4 months ago
Hi all, I have a question about database schemas; I hope this is the right place.

What originally raised the need to have schemas for our databases? Basically, how can a database schema be defined, and how can we make different schemas in our database "speak" to each other?

"Bonus" question: in a microservices architecture, should each microservice have its own schema, and should it communicate with other schemas without speaking to the other microservices? What could be a good approach if, from microservice A, I want to retrieve information from a schema owned by microservice B (for example, an Address microservice speaking with a User microservice)?

Thanks for your help, and sorry for any mistakes.