Over the last few weeks, I've spent almost all my spare time playing a bit with deep reinforcement learning. I can say I have rarely had such a frustrating experience before.
As a software developer / software architect, I'm really used to having to study new technologies and to shift my mental point of view. But RL is a damn evil beast to tame.
First, the impression I got is that the whole field is (still) really brittle, at least if you are not a real expert.
I mean, while developing software, you know you can rely on some consolidated best practices, some recipes that, more or less, can help you solve your problem and take a step further.
With RL I think that's not the case. I was playing with OpenAI Gym, which provides you with some "environments" your "agents" can play with.
If you try to solve a problem with a model that's too complex, you will fail.
If your model is too simple, it will fail.
If you change some of the hyperparameters in your neural network, you risk failing.
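For what it's worth, the part of Gym that *is* stable is the agent/environment interface itself: reset the environment, pick an action, receive an observation and a reward, repeat until done. Here is a minimal sketch of that loop, using a hypothetical toy environment of my own instead of a real Gym one (so it runs without Gym installed); `ToyEnv` and its reward scheme are my invention, but the `reset()`/`step()` shape mirrors the classic Gym API:

```python
import random

class ToyEnv:
    """Hypothetical stand-in for a Gym environment.

    The agent starts at position 0 and moves left/right; the episode
    ends when it reaches +5 (reward 1.0) or -5 (reward 0.0).
    """
    def __init__(self):
        self.pos = 0

    def reset(self):
        # Like gym's env.reset(): start a new episode, return the first observation.
        self.pos = 0
        return self.pos

    def step(self, action):
        # Like gym's env.step(): take an action, return (obs, reward, done, info).
        # action: 0 = move left, 1 = move right
        self.pos += 1 if action == 1 else -1
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos >= 5 else 0.0
        return self.pos, reward, done, {}

def run_episode(env, policy):
    """The canonical RL loop: observe, act, collect reward, until done."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward

random.seed(0)
# A "random agent" -- the usual first baseline before any learning happens.
reward = run_episode(ToyEnv(), lambda obs: random.randint(0, 1))
print(reward)
```

Everything brittle I'm complaining about lives in what replaces that `lambda`: the network, the hyperparameters, the exploration schedule.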
What astonishes me is that RL seems like a big puzzle that can only be solved through a lot of experimentation, which seems pretty equivalent to saying "it's solved mainly with the help of a good dose of luck" — something I believe is irreconcilable with science in general. And with a huge amount of time, and data.
I have to admit I'm a beginner swimming in a sea of ignorance (my own ignorance, of course), but even though I've made some progress, I'm really upset, and I wonder if anyone else here has had the same experience.
Thanks in advance.