
Dependency Inversion Principle, JPA and Testing Issue

 
Ashley Bye
Ranch Hand
I'm not sure which forum is best to post this in as it covers a couple of topics. If it needs moving, please let me know where would be most appropriate for this broad-ranging topic and I'll work out how to move it.

I have a class Transaction, which can have many Sales and references them via a field List<Sale> sales. I also have a SaleResource class and will eventually have a TransactionResource class too. One of the methods I want to implement is a getSales() method. A transaction can exist without a sale (it could have a purchase, but no corresponding sale). My understanding of the Dependency Inversion Principle is that the sale depends on the transaction, and not the other way round. Am I right in thinking my method should be placed in the SaleResource class and accept a Transaction parameter?
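For reference, the mapping is roughly this (a simplified sketch rather than my exact code; the annotations and field names are illustrative):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class Transaction {
    @Id @GeneratedValue
    private Long transactionId;

    // a transaction can exist without any sales
    @OneToMany(mappedBy = "transaction")
    private List<Sale> sales = new ArrayList<>();

    public Long getTransactionId() { return transactionId; }
    public List<Sale> getSales() { return sales; }
}

@Entity
class Sale {
    @Id @GeneratedValue
    private Long saleId;

    @ManyToOne
    private Transaction transaction;
}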

Is this correct or is it more correct to put it in the TransactionResource class?

This then leads me to my next question, which is JPA related. Can I just call transaction.getSales() or this.getSales() (the relationships are set up correctly in the entities, as is the getter, to allow this) and expect the relevant result list to be returned? Or do I need to first find the relevant entity (entityManager.createNamedQuery(...) etc.), and then call transaction.getSales()? At this point, as I understand it, the entity might not be registered with the persistence context, so it might not have List<Sale> sales referencing a list of Sale entities. Correct? How would that change with lazy loading?
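To be concrete, the explicit lookup I have in mind is something like this (a sketch; the id parameter and names are placeholders):

// find the managed entity first, then navigate the mapped relationship;
// with lazy loading the sales are fetched when getSales() is called,
// provided the persistence context is still open
List<Sale> loadSales(EntityManager entityManager, Long transactionId) {
    Transaction managed = entityManager.find(Transaction.class, transactionId);
    return managed.getSales();
}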

Then finally, how do I test this in my TestSaleResource class? Whenever I write a test, I get a NullPointerException. I am using JUnit with Mockito to mock the external dependencies:
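The test is along these lines (a simplified sketch, not verbatim):

import static org.junit.Assert.assertEquals;

import java.util.List;
import javax.persistence.EntityManager;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TestSaleResource {

    @Mock
    private EntityManager entityManager;   // external dependency, mocked but never stubbed

    @InjectMocks
    private SaleResource saleResource;

    @Test
    public void testGetSalesForTransactionWhenSalesPresentReturnsListSale() {
        Transaction transaction = new Transaction();

        List<Sale> sales = saleResource.getSalesFor(transaction);

        assertEquals(2, sales.size());
    }
}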

I suspect this is because of my implementation:
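Roughly (again a sketch; the named query and parameter names are stand-ins):

import java.util.List;
import javax.persistence.EntityManager;

public class SaleResource {

    private final EntityManager entityManager;

    public SaleResource(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public List<Sale> getSalesFor(Transaction transaction) {
        // with an unstubbed EntityManager mock, createNamedQuery() returns null,
        // which is where the NullPointerException comes from
        return entityManager
                .createNamedQuery("Sale.findByTransactionId", Sale.class)
                .setParameter("transactionId", transaction.getTransactionId())
                .getResultList();
    }
}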

Clearly the test will fail based on the assertEquals parameters, since there is currently no way transaction will return a list of 2 sales. I have tried mocking various objects to make this work but I still have no joy, which is why I ask about JPA above. How should I be designing my test, what should and shouldn't be mocked? Also, is this method testable and if not, how would I go about making it so?

Again, please let me know if this needs moving to another forum and I will give it a go. Thanks.
 
Dave Tolls
Rancher
I'm not sure I understand the relationship between these entities.

What exactly are the XXXResource classes? Are they simply DAOs?

If you have a mapped relationship between Transaction and Sales, then you don't need a getSales method if you already have the Transaction, as the Transaction should already have its Sales.
 
Ranch Hand
Dear Dave,


For testing JPA you can try another approach, which is to create all your entities using an in-memory database and test them with JUnit. In this case you do not need any mocking library.
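For example, something along these lines (assuming a test persistence unit, here called "test-pu", that points at an in-memory database such as H2):

import static org.junit.Assert.assertTrue;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TransactionPersistenceTest {

    private EntityManagerFactory emf;
    private EntityManager em;

    @Before
    public void setUp() {
        // "test-pu" is defined in src/test/resources/META-INF/persistence.xml
        // with an in-memory JDBC URL, for example jdbc:h2:mem:test
        emf = Persistence.createEntityManagerFactory("test-pu");
        em = emf.createEntityManager();
    }

    @Test
    public void aTransactionWithoutSalesHasAnEmptySalesList() {
        em.getTransaction().begin();
        Transaction transaction = new Transaction();
        em.persist(transaction);
        em.getTransaction().commit();

        Transaction found = em.find(Transaction.class, transaction.getTransactionId());

        assertTrue(found.getSales().isEmpty());
    }

    @After
    public void tearDown() {
        em.close();
        emf.close();
    }
}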

You can ask the moderator to move this thread to the forum http://www.coderanch.com/forums/f-78/ORM .

You can have a look at this thread: http://www.coderanch.com/t/635574/ORM/databases/unit-test-JPA-class .


Bye.

 
Dave Tolls
Rancher
Um...I'm not the OP.
 
Junilu Lacar
Sheriff

Ashley Bye wrote:I have a class Transaction, which can have many Sales and references them via a field List<Sale> sales. ... My understanding of the Dependency Inversion Principle is that the sale depends on the transaction, and not the other way round.


It's neither. DIP is about decoupling and insulating dependency relationships from the ripple effects of change. As you describe it, your design has Transaction depending on Sales because it references Sales. If you consider Transaction as a high-level module and Sales as a low-level module, then DIP says you should introduce an abstraction layer in between Transaction and Sales so that both Sales and Transaction will depend on the abstraction layer instead.
 
Ashley Bye
Ranch Hand

Junilu Lacar wrote: If you consider Transaction as a high-level module and Sales as a low-level module, then DIP says you should introduce an abstraction layer in between Transaction and Sales so that both Sales and Transaction will depend on the abstraction layer instead.



Ah, got it. So would a correct example of DIP be as follows?


 
Junilu Lacar
Sheriff

Ashley Bye wrote:
Ah, got it. So would a correct example of DIP be as follows?


Almost.

Before DIP: Transaction --> JpaTransactionRepository

After DIP: Transaction --> TransactionRepository <-- JpaTransactionRepository

In code, that would be:
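Sketching it with assumed names (what matters is the direction of the dependencies, not the method names; the classes would each live in their own file):

import java.util.List;
import javax.persistence.EntityManager;

// high-level class: knows only about the abstraction
public class Transaction {

    private TransactionRepository repository;   // how this gets set is covered below

    public List<Sale> getSales() {
        return repository.findSalesFor(this);
    }
}

// the abstraction that both sides now depend on
interface TransactionRepository {
    List<Sale> findSalesFor(Transaction transaction);
}

// low-level detail: the JPA implementation depends on the interface,
// not the other way around
class JpaTransactionRepository implements TransactionRepository {

    private final EntityManager entityManager;

    JpaTransactionRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Override
    public List<Sale> findSalesFor(Transaction transaction) {
        return entityManager
                .createNamedQuery("Sale.findByTransaction", Sale.class)
                .setParameter("transaction", transaction)
                .getResultList();
    }
}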



It doesn't really matter what package they would be in. However, I have started putting my high-level classes in a "core" module, separate from the implementations. That is, I would put Transaction and TransactionRepository (the interface) in myproject-core.jar, for example, and JpaTransactionRepository in myproject-db-jpa.jar along with the rest of the JPA implementations of other Repository interfaces.

A caveat here is to avoid "interface explosion" and apply DIP judiciously. My rule of thumb is to introduce the interfaces at the "edges" of the hexagonal architecture, which is how most of my designs these days end up.
 
Junilu Lacar
Sheriff
BTW, the logical conclusion to applying DIP is to use DI (Dependency Injection).

In code,
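(continuing the sketch from my previous post, with the same assumed names)

import java.util.List;

public class Transaction {

    private final TransactionRepository repository;

    // the repository is injected; Transaction never creates or looks up an implementation itself
    public Transaction(TransactionRepository repository) {
        this.repository = repository;
    }

    public List<Sale> getSales() {
        return repository.findSalesFor(this);
    }
}

// wiring, done by a DI container or by hand at the edge of the application:
//   Transaction transaction = new Transaction(new JpaTransactionRepository(entityManager));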

Without DI, you'd have to use a Factory of some sort to get a reference to a TransactionRepository implementation.
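For contrast, a factory-based version might look like this (again a sketch with made-up names, classes in their own files):

import java.util.List;

public final class TransactionRepositoryFactory {

    private static TransactionRepository instance;

    private TransactionRepositoryFactory() { }

    // something at start-up decides which implementation to hand out
    public static void configure(TransactionRepository repository) {
        instance = repository;
    }

    public static TransactionRepository create() {
        return instance;
    }
}

// Transaction now actively fetches its own dependency, which couples it to the factory
public class Transaction {

    private final TransactionRepository repository = TransactionRepositoryFactory.create();

    public List<Sale> getSales() {
        return repository.findSalesFor(this);
    }
}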
 
Junilu Lacar
Sheriff
I moved this to the Testing forum since the real question is about Testing.

Ashley Bye wrote:I have tried mocking various objects to make this work but I still have no joy, which is why I ask about JPA above. How should I be designing my test, what should and shouldn't be mocked? Also, is this method testable and if not, how would I go about making it so?


Let's nitpick this a little:

In the test name 'testGetSalesForTransactionWhenSalesPresentReturnsListSale', the 'test' prefix is redundant. You already have the @Test annotation and if you're using that, it means that you don't need your test methods to start with 'test'. Get rid of the prefix.

I find the camelcase a little hard to read, especially when the name gets long, so I do this instead: getSales_for_transaction_returns_list_of_sales

The assertEquals at the end of the test states that you expect the list of sales to have two items in it. The 2 in that assertion comes out of nowhere: nothing in the rest of the test sets it up, so the 2 becomes a magic number. If you had set up your EntityManager mock to return exactly two Sale objects, it would make more sense.

Actually, the story I would make the test tell is this:
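Along these lines (a sketch, not the code from the original post; the named query, getTransactionId(), and the SaleResource constructor are assumptions, and the static imports from Mockito, Assert, and Arrays are omitted):

@Test
public void getSales_for_transaction_returns_list_of_sales() {
    EntityManager mockEntityManager = mock(EntityManager.class);
    TypedQuery<Sale> mockQuery = mock(TypedQuery.class);
    Transaction mockTransaction = mock(Transaction.class);

    when(mockTransaction.getTransactionId()).thenReturn(1L);
    when(mockEntityManager.createNamedQuery("Sale.findByTransactionId", Sale.class))
            .thenReturn(mockQuery);
    when(mockQuery.setParameter("transactionId", 1L)).thenReturn(mockQuery);
    when(mockQuery.getResultList()).thenReturn(asList(new Sale(), new Sale()));

    SaleResource saleResource = new SaleResource(mockEntityManager);

    List<Sale> sales = saleResource.getSalesFor(mockTransaction);

    assertNotNull(sales);
    assertEquals(2, sales.size());
}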

By adding a 'mock' prefix to all the mock objects, you make the class under test, saleResource, stand out. That makes it easier to discern that the test is about the saleResource object. This kind of test is a 'white box' test because it knows intimate details of how the getSalesFor(Transaction) method works. It's not a bad test to have, but you need to recognize that it's more brittle. If you change the implementation of the getSalesFor() method, this test is more likely to fail in some way or become obsolete.

Note also that I added a notNull assertion before checking the size of the returned list. When I see myself doing this, I ask "Do I need two separate tests?" to which I would most likely answer "Yes." So, I'd refactor to something like this:
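Again just a sketch of what TestSaleResource could look like (the builders, constants, and the stubbed named query are illustrative; static imports from Mockito, Assert, Arrays, and Collections are omitted):

private static final Long ID_WITH_SALES = 1L;
private static final Long ID_WITHOUT_SALES = 2L;
private static final List<Sale> WITH_SALES = asList(new Sale(), new Sale());

@Test
public void getSalesFor_a_transaction_always_returns_a_list() {
    SaleResource saleResource = new SaleResource(entityManagerThat().findsNoSales().build());

    assertNotNull(saleResource.getSalesFor(transactionWithId(ID_WITHOUT_SALES).build()));
}

@Test
public void getSalesFor_a_transaction_with_sales_returns_all_of_them() {
    SaleResource saleResource = new SaleResource(entityManagerThat().finds(WITH_SALES).build());

    List<Sale> sales = saleResource.getSalesFor(transactionWithId(ID_WITH_SALES).build());

    assertEquals(WITH_SALES.size(), sales.size());
}

// helper methods return builders, gathering all the Mockito setup in one place
private EntityManagerBuilder entityManagerThat() { return new EntityManagerBuilder(); }
private TransactionBuilder transactionWithId(Long id) { return new TransactionBuilder(id); }

private static class EntityManagerBuilder {
    private List<Sale> sales = emptyList();

    EntityManagerBuilder findsNoSales() { return this; }
    EntityManagerBuilder finds(List<Sale> found) { this.sales = found; return this; }

    EntityManager build() {
        EntityManager mockEntityManager = mock(EntityManager.class);
        @SuppressWarnings("unchecked")
        TypedQuery<Sale> mockQuery = mock(TypedQuery.class);
        when(mockEntityManager.createNamedQuery(anyString(), eq(Sale.class))).thenReturn(mockQuery);
        when(mockQuery.setParameter(anyString(), any())).thenReturn(mockQuery);
        when(mockQuery.getResultList()).thenReturn(sales);
        return mockEntityManager;
    }
}

private static class TransactionBuilder {
    private final Long id;

    TransactionBuilder(Long id) { this.id = id; }

    Transaction build() {
        Transaction mockTransaction = mock(Transaction.class);
        when(mockTransaction.getTransactionId()).thenReturn(id);
        return mockTransaction;
    }
}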

The first test is high-level design. It says that the getSalesFor method always returns a List<Sale>. The related design principle here is to never return null when the return type is a List. You return an empty list if there are no results.

The second test is low-level design. It's a whitebox test that relies on the implementation of getSalesFor using the Transaction object to get a transactionId that is then used in the EntityManager lookup. This test is more brittle because it relies on Transaction providing an id, presumably via a getTransactionId() method, and on the EntityManager query being set up to return a result list of some kind.

You may have noticed that I hide a lot of details in builders. I find that I end up refactoring a lot of my test setup code to helper methods or builders. Here, I combine the two: the helper method returns a builder. This helps me gather all the setup logic in one place. Then I'll use constants like ID_WITH_SALES and WITH_SALES to clarify intent and eliminate duplication.

Caveat: the test code I give above is just off the top of my head. I'd probably refactor that a little bit more, especially the name WITH_SALES, but that's the general idea of the approach I would take.
 
Junilu Lacar
Sheriff
This example shows you why you need to be careful not to go too crazy with mocks. When your mock object setup becomes too hairy even with helper methods and builders, then you have to stop and realize that your code/design is getting too hard to test. When this happens to me, I go back and start refactoring for simplicity, looking for dependencies that I can break or push further down.
 
Ashley Bye
Ranch Hand

Junilu Lacar wrote:In code, that would be:


Is there any reason why the repo is injected into the Transaction? Rethinking my architecture since my original post, I now have an application layer with a DefaultTransactionService, which implements a TransactionService interface. The service ties together the various elements of the domain that represent different aspects of a transaction, and the interface represents the API for the user interface. As such, I have the repo injected into the service implementation. What are the advantages and disadvantages of the two approaches?
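For reference, my current set-up is roughly this (simplified, with the method name for illustration only):

import java.util.List;

public interface TransactionService {
    List<Sale> getSalesFor(Transaction transaction);
}

public class DefaultTransactionService implements TransactionService {

    private final TransactionRepository repository;

    // the repository is injected into the service rather than into the Transaction entity
    public DefaultTransactionService(TransactionRepository repository) {
        this.repository = repository;
    }

    @Override
    public List<Sale> getSalesFor(Transaction transaction) {
        return repository.findSalesFor(transaction);
    }
}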

Junilu Lacar wrote:My rule of thumb is to introduce the interfaces at the "edges" of the hexagonal architecture, which is how most of my designs these days end up.


I've not heard of this architecture but will add it to my ever-growing list of topics to research.

Junilu Lacar wrote:This kind of test is a 'white box' test because it knows intimate details of how the getSalesFor(Transaction) method works. It's not a bad test to have but you need to recognize that it's more brittle. If you change the implementation of the getSalesFor() method, this test is more likely to fail in some way or become obsolete.


This leads me to think that getSalesFor(Transaction) is not a well designed method. Would it be better to have the method in the TransactionService, so that I am instead sending a message to the Transaction instance referenced like so: transaction.getSales()?
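That is, something like this inside DefaultTransactionService (a sketch):

@Override
public List<Sale> getSalesFor(Transaction transaction) {
    // delegate to the Transaction itself instead of querying on its behalf
    return transaction.getSales();
}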


 
Junilu Lacar
Sheriff

Ashley Bye wrote:This leads me to think that getSalesFor(Transaction) is not a well designed method. Would it be better to have the method in the TransactionService, so that I am instead sending a message to the Transaction instance referenced like so: transaction.getSales()?


I don't know if I could jump to that conclusion. I always say "listen to what the code is saying and what it wants to do." I know that may sound a bit cryptic and Zen-like; I'm still trying to find a good way to explain it in more concrete terms, but that's exactly what I do. I read the code, often out loud, and I listen for awkward or nonsensical sentences. When I read the code you have and the code that I suggested, I don't hear anything that sets off alarm bells. It all still kind of makes sense. I would leave it alone for now and move on. As you write more code that uses these classes, you may end up with code that does sound nonsensical. Depending on what kind of refactoring you'd need to do so that the code makes sense again, moving the method to the TransactionService might be the right thing to do at that point.

This is one of the advantages of doing Test-Driven Development. As you add more tests, you create new scenarios in which the objects and their collaborators are used. The more scenarios you put the objects in, the more you exercise them, the more you find out about the API design, both good and bad. It's nice when you write a test and it fits in well with the API. It's even better when you find it difficult to write a test, because now you've learned that there's something that can be improved. The tests drive the design and discovery of the API. You can design predictively, but being successful at that requires a lot of experience, skill, and a good measure of luck. IMO, TDD tips the balance towards luck because the more you exercise the API with tests, the more likely you are to discover flaws.

Edit: I was just re-reading Kent Beck's "Test-Driven Development By Example" book and I think this is related to what I called "luck":

Kent Beck wrote:The ability to create so much business value so quickly was no accident, however. Several factors came into play.
· Method-- Ward and the WyCash team needed to have constant experience growing the design of the system, little by little, so the mechanics of the transformation were well practiced.
· Motive-- Ward and his team needed to understand clearly the business importance of making WyCash multi-currency, and to have the courage to start such a seemingly impossible task.
· Opportunity-- The combination of comprehensive, confidence-generating tests; a well-factored program; and a programming language that made it possible to isolate design decisions meant that there were few sources of error, and those errors were easy to identify.


In retrospect, I think Beck is right about Opportunity. Well-factored and well-tested code gives you more opportunities for morphing your designs and adapting to new requirements. I guess my idea about TDD making you more "lucky" stems from the word "chance" being a synonym for "opportunity" and chance is associated with luck.
 
Junilu Lacar
Sheriff

Ashley Bye wrote:
Is there any reason why the repo is injected into the Transaction?


Dependency Injection results in very loosely coupled classes. Transaction depends on the TransactionRepository interface. As I said before, if we didn't inject the repo, then we'd either have to instantiate the repo in Transaction, which defeats the purpose of programming to an interface, or we could use a more active way of getting a reference, such as calling a Factory method or using a Locator. Doing that would couple you to the Factory or the Locator. With DI, Transaction becomes more passive. It doesn't care how the TransactionRepository is created or obtained. It just assumes that something will give it a valid TransactionRepository to use before it is asked to do anything related to TransactionRepository.
 