Carey Brown wrote: I would prefer to see all the questions first before committing to spending the time answering them.
Brecht Geeraerts wrote: Since I have a PhD myself, I have always been intrigued by academic scientific research. On the site it is stated that the participants need to stay focused until the end for the data to be valid. Based on what objective criteria will you include/exclude data to limit the bias in your study? I'm just curious...
Liutauras Vilda wrote: Are these related to some extent?
https://coderanch.com/t/714433/code-reviews/engineering/Experiment-Code-Reviews-GitLab (from 6 days ago)
Liutauras Vilda wrote:
https://coderanch.com/t/697113/open-source/Automation-Modern-Code-Review (from 27 July 2018)
And I'm sure I saw more of this kind from even earlier times.
Alberto Bacchelli wrote:
2. Then, we ask you to do the code review, asking you to focus on the bugs (that is, errors that make the code not work in all scenarios) rather than maintainability issues (e.g., readability).
Michael Krimgen wrote: Done! Good luck with your research!
Junilu Lacar wrote:
Interesting. I do exactly the opposite in the code reviews I conduct. My thinking is that bugs should be addressed by unit testing. Team-level code reviews are for ensuring that everyone who looks at the code comes to the same understanding of its intent. That often doesn't happen when code is not expressive and readable.
In my book, bugs are caused by misunderstanding. If you focus first on bugs without making the code readable and understandable, then you are ignoring the root of the problem.
Junilu Lacar wrote: My thinking is that bugs should be addressed by unit testing. Team-level code reviews are for ensuring that everyone who looks at the code comes to the same understanding of its intent.
Liutauras Vilda wrote: So the review feedback shouldn't point out the presence of bugs, but rather the lack of unit tests for the particular parts of the code that aren't covered (which is how the bugs slipped in).
Brecht Geeraerts wrote: I also made my contribution!
I'd definitely be interested to read the peer-reviewed manuscript once it is published!
Junilu Lacar wrote:
We start out code reviews by running all the tests. That way, we at least know that there are no known bugs.
Then we go through the tests and make sure we all have the same understanding of what the intent is for each one.
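The idea of tests documenting intent can be sketched with a minimal example. Everything here is hypothetical and not from the thread: a tiny `shipping_cost` function whose business rule is captured by intent-revealing test names, so reviewers reading the tests first can agree on what the code is supposed to do before inspecting how it does it.

```python
# Hypothetical example: tests whose names state the intended behavior,
# so a code review can start by confirming shared understanding of intent.

def shipping_cost(order_total: float) -> float:
    """Orders of 50.00 or more ship free; otherwise a flat 5.00 fee."""
    return 0.0 if order_total >= 50.0 else 5.0

def test_orders_at_or_above_fifty_ship_free():
    # The threshold itself is inclusive: exactly 50.00 qualifies.
    assert shipping_cost(50.0) == 0.0

def test_orders_below_fifty_pay_flat_fee():
    assert shipping_cost(49.99) == 5.0

if __name__ == "__main__":
    test_orders_at_or_above_fifty_ship_free()
    test_orders_below_fifty_pay_flat_fee()
    print("all tests passed")
```

Run with a plain `python` invocation or under a test runner such as pytest; either way, a reviewer who disagrees with the inclusive threshold can raise it as a question of intent rather than a bug report.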