
Recent posts by Stepankha Yuliannia

Hi Alex,

Eric Brewer's CAP theorem states that of the three properties you want in a distributed system - consistency, availability, and partition tolerance - it is only ever possible to achieve two at the same time (in real-world distributed systems we typically relax one constraint, for example consistency). Within functional programming (FP), we use immutability to reason about how a program behaves, and that helps us with things like concurrency, because we can be sure that this is the value and nothing is changing it underneath us. When we take that reasoning outside of a single process and its memory and apply it to how distributed systems (DS) behave, we are talking about persistent storage, messages on the wire, and so on. Would you then agree that FP is closer to these real-world scenarios and therefore more natural for DS?
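To make concrete what I mean by being able to tell "this is the value", here is a minimal Java sketch (the record name and the values are invented just for illustration): once the value is constructed it can never change, so any thread reading it sees exactly what was written, without locking.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ImmutableValueDemo {

    // An immutable value: all components are final, there are no setters.
    record AccountSnapshot(String accountId, long balanceCents) { }

    public static void main(String[] args) {
        var snapshot = new AccountSnapshot("acc-42", 10_000);

        // Several threads can read the same snapshot concurrently.
        // Because it can never change, every reader can tell "this is the value";
        // there is no window in which another thread mutates it underneath us.
        List<CompletableFuture<Void>> readers = List.of(
                CompletableFuture.runAsync(() -> System.out.println("reader 1 sees " + snapshot)),
                CompletableFuture.runAsync(() -> System.out.println("reader 2 sees " + snapshot))
        );
        readers.forEach(CompletableFuture::join);

        // "Updating" produces a new value instead of mutating the old one,
        // which is the same discipline an event log or a message on the wire gives you.
        var updated = new AccountSnapshot(snapshot.accountId(), snapshot.balanceCents() + 500);
        System.out.println("old: " + snapshot + ", new: " + updated);
    }
}
```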

Many thanks & have a good one
3 months ago

Junilu Lacar wrote:

Alexander Granin wrote:

Stepankha Yuliannia wrote:I'm just going to add that performance is a big thing/reason to use FP (supports parallel programming). OOP does not support parallel programming.

Yes, agree. I'd only say that the term "performance" is quite overloaded, and we might be talking about different things here.


Just wanted to get some clarification on this "OOP does not support parallel programming" statement.

Admittedly, I haven't written any Java programs that use them, but Java does have features that support parallel programming. Are you both disagreeing with that assertion, or is there some nuance in what you said that I'm just missing? Isn't support for parallel programming "a function" (excuse the pun) of the implementation rather than the programming paradigm (OOP vs FP) itself? A similar example would be tail recursion. Nothing in OOP or FP says that tail recursion optimization isn't supported, right? Currently, Java doesn't support TRO as far as I know, but other JVM languages like Kotlin and Scala do.



Read/write safety is the biggest concern and bottleneck in parallel processing. So what I meant is that OOP is not "real" parallel computing, in my view. Functional programming deals mostly with immutable objects, which means you are unlikely to need to worry about some other thread changing the content before you get to use it. It also sidesteps the problem by saying: why reuse and recycle the same memory space? There is so much memory compared to processors, so let's forget about saving space and worry about speed instead.
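A minimal Java sketch of the idea (the record names and data are invented for illustration): when the inputs are immutable and each step produces a new value instead of mutating shared state, the work can be spread across cores without locks.

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelImmutableDemo {

    // Immutable input and result types: nothing can change after construction.
    record Order(String id, long amountCents) { }
    record Enriched(String id, long amountCents, long taxCents) { }

    public static void main(String[] args) {
        List<Order> orders = IntStream.rangeClosed(1, 1_000)
                .mapToObj(i -> new Order("order-" + i, i * 100L))
                .toList();

        // Each element is transformed into a *new* Enriched value; no task writes
        // to memory that another task reads, so no synchronization is required.
        long totalTax = orders.parallelStream()
                .map(o -> new Enriched(o.id(), o.amountCents(), o.amountCents() / 5))
                .mapToLong(Enriched::taxCents)
                .sum();

        System.out.println("total tax: " + totalTax);
    }
}
```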
3 months ago
I'm just going to add that performance is a big thing/reason to use FP (supports parallel programming). OOP does not support parallel programming.
3 months ago
Thank you Alexander, great answer!
3 months ago
Hi Alex,

Embedded domain-specific languages are everywhere in functional programming; we can say that FP and eDSLs/DSLs are natural partners. We use an eDSL/DSL to reduce the complexity of the system and to keep things decoupled from each other.

Once we decide to use an eDSL/DSL, the client code works only with the DSL (its dependency on native calls is eliminated). We then have the option of a native interpreter for the eDSL/DSL that converts the high-level commands to native functions, or we can test our scenarios separately by writing a testing interpreter. What is your personal preference and why?
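Here is a rough Java sketch of the two options I mean (all names are hypothetical): the client scenario is written against a tiny DSL interface, and we can plug in either a "native" interpreter that does real work or a test interpreter that just records the commands.

```java
import java.util.ArrayList;
import java.util.List;

public class DslInterpreterDemo {

    // The eDSL: client code depends only on this interface, not on native calls.
    interface Storage {
        void put(String key, String value);
        String get(String key);
    }

    // "Native" interpreter: translates DSL commands into real effects
    // (here just stdout, but it could be a database or an HTTP call).
    static class NativeStorage implements Storage {
        public void put(String key, String value) { System.out.println("PUT " + key + "=" + value); }
        public String get(String key) { return "real-value-of-" + key; }
    }

    // Test interpreter: records the scenario so it can be asserted on, with no real effects.
    static class RecordingStorage implements Storage {
        final List<String> log = new ArrayList<>();
        public void put(String key, String value) { log.add("put " + key); }
        public String get(String key) { log.add("get " + key); return "stub"; }
    }

    // A client scenario written purely in terms of the DSL.
    static String scenario(Storage storage) {
        storage.put("greeting", "hello");
        return storage.get("greeting");
    }

    public static void main(String[] args) {
        scenario(new NativeStorage());      // run "for real"
        var recorder = new RecordingStorage();
        scenario(recorder);                 // run under the test interpreter
        System.out.println("recorded: " + recorder.log);
    }
}
```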

Thanks
3 months ago
Hi Alex,

What are your criteria for deciding whether you should have a message broker in the middle of your solution, like RabbitMQ, ZeroMQ, Apache Kafka, etc.? I'm especially interested in the functional programming point of view.

P.S. I understand your book is focused on application architecture, but would this be covered too?
3 months ago
Hi Alexander,

Looking at the limited preview, I already like the book :-) It seems to cover functional architectural design patterns and to explore functional design by looking at object-oriented principles and how they can be reapplied in a functional environment - which is exactly what I'm looking for.

My question to you is: what are your views on when we need a service-oriented approach (and therefore a service-oriented architecture), and when is it better to avoid it?

Thanks

3 months ago
Hi Alex, warm welcome to you!
3 months ago
Congratulations to all the winners.
9 months ago
I wouldn't say it "never makes sense to use cloud PaaS". Let me put it this way: unless you are ABSOLUTELY required to use a specific PaaS from one of the cloud providers (for reasons such as compliance, monitoring-tool lock-in, data lock-in, networking lock-in, etc.), it is often better to use multiple providers:
1) No single cloud service provider has the best tools for everything. By using multiple cloud service providers, or an abstraction layer above them, you can cherry-pick the best services from each.
2) Depending on one provider for any product or service is risky. Not only might they suffer an outage, but their service levels could decline or their prices could go up (theoretical, but possible). By not putting all your eggs in one basket, you minimize the risk of your own business suffering in the future.
3) If you are a high-volume customer (e.g. a million dollars or more per year), you may also be in a position to negotiate lower prices.
9 months ago
From the CI/CD perspective, I don't think it makes much difference whether the codebase is legacy or brand new (if by legacy you mean systems that do not have automated tests, as provocatively defined in Michael Feathers' book "Working Effectively with Legacy Code"). The same principles still apply - you need to reach a certain level of maturity at the various stages of your software delivery, as I explained in the other thread here: https://coderanch.com/t/741354/engineering/Pipeline-Code-Practices-CI-CD.

The first priority when dealing with such a system is usually to create an automated build process if one does not exist. The next priority would be to create automated functional test scaffolding around it. Creating automated tests will be easier if documentation or the original team members are still available (which might not be the case). This can be a hard sell, because from the business perspective you are spending time on what looks like a low-value activity - the legacy system is already in production and working "fine".

Once these smoke tests are in place, you can take a layered approach to your automated tests. The first layer consists of very simple, fast-running tests that catch problems which would prevent you from doing useful testing and development on the functionality you are working on. The second layer tests the critical functionality of a particular feature. This can be harder than it sounds: systems designed to be testable tend to be more modular and easier to test than those that are not. However, this should not divert you from the goal.

It is important to remember that you should only write automated tests where they deliver value. The vast majority of regression bugs are caused by altering framework code, so if you are only adding features that do not require changes to the underlying framework, there is little value in writing comprehensive scaffolding. The exception is when your software has to run in a number of different environments - in that case, automated tests combined with automated deployment to production-like environments deliver a great deal of value, since you can simply point your scripts at the environments to be tested and save yourself a lot of manual testing.
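To illustrate what that first, fast-running layer might look like, here is a minimal JUnit 5 style sketch (the class name, base URL, and endpoint are invented; the real checks depend entirely on the system): the point is only that these tests are cheap, run on every build, and fail loudly when the basics are broken.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

// First-layer "smoke" tests: fast checks that the deployed legacy system is
// basically alive, run before any deeper feature-level tests.
class LegacySystemSmokeTest {

    // Hypothetical endpoint of the legacy system under test.
    private static final String BASE_URL = "http://localhost:8080";

    @Test
    void applicationRespondsToHealthCheck() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/health")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // If this fails there is no point running the slower feature-level layer.
        assertTrue(response.statusCode() < 500, "legacy system is not responding");
    }
}
```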

Good luck!
9 months ago
When your organization is already locked in to a specific provider's container PaaS and has all its data and other supporting infrastructure on that provider, it makes sense to keep using that cloud PaaS. However, if you are talking about a greenfield project and starting from scratch, it would probably be a smart decision to use an abstraction layer like OpenShift/Rancher/CloudFoundry, which will make the solution more future-proof and allow you to move more freely between platforms.

I found this article, which explains it really well: https://www.overops.com/blog/pivotal-cloud-foundry-vs-kubernetes-choosing-the-right-cloud-native-application-deployment-platform/
9 months ago
Hi,

Some people are opinionated about this, but I would like to hear your view on when to use a Jenkins Scripted Pipeline and when it is better to use a Jenkins Declarative Pipeline.

Kind regards
9 months ago
The author may have a different opinion on this, but I would still use CI if more than two people are touching the same codebase and potentially "stepping on each other's toes". Even a simple workflow like "you push your code to GitHub" -> "GitHub triggers CI to test & build" -> "your build passed!" immediately gives you more efficiency without requiring much expertise. For this simple scenario, I would probably use something like Travis CI if you don't want to manage yet another tool in your arsenal.
9 months ago
When comparing CNIs, you are essentially comparing speed/throughput, and when comparing CRIs you are mostly concerned with security differences.
Look at these two links:
Comparing CNIs
Comparing CRIs

My personal preference is Flannel + CRI-O
9 months ago