It is always nice to go to bed, listen to music and dream away. That is, until this flipping brain of mine gets a new idea. In this case it is not new IP, but a new setting for a group of people. You see, during lockdown I got hooked on walk videos. They were a way to see places I had never visited before, a way to get around, and weirdly enough, these walk videos are cool. You see more than you usually do (especially in London). Most of them are actually quite good; a few need tinkering (like the music being too loud), but for the most part they are a decent experience.

Then I thought: what if GoPro makes a change, offering a new stage? That got me going. You see, most walks are filmed from a stick, decent but intense for the filming party. So we can set the camera on a shoulder mount, a chest mount, or a helmet mount. Yet what is being filmed? What happens if we have something like Google Glasses and the left (or right) eye shows what we see in the film? We get all kinds of degrees of filming. And if we want to ignore it, we merely close that eye for a moment. I am surprised that GoPro has not considered it, or perhaps they did. Consider that the filmer now has BOTH hands free and can hold something towards the camera; the filming agent can do more and move more freely. Consider that it works with a holder, but there is a need (in many cases) to have both hands available. And perhaps there is a need for both: one hand for precision and a gooseneck mount to keep both hands free. The interesting part is that there is no setting to get the image on something like Google Glasses, and that is a shame. Was I the first to think of it?
It seems weird with all the city walks out there on YouTube, but there you have it. In that light, I was considering revisiting the IP I had for a next Watchdogs, one with a difference (every IP creator will tell you that part), but I reckon that is a stage we will visit again soon enough; it involves Google Glasses and another setting that I will revisit. Just like the stage of combining deeper machine learning with a lens (or Google Glasses): a camera lens that offers direct translations, and the fun part is that we can select whether that is pushed through to the film, or merely seen by us. Now consider filming in Japan with machine learning and deeper machine learning auto-translating ANY sign it sees. Languages that we do not know will no longer stop us; it will tell the filmmaker where they are. And consider linking that to one lens in Google Glasses that overlays the map. Is that out yet? I never saw it, and there are all kinds of needs for that part. What you see is what you know, if you know the language. Just a thought at 01:17. I need a hobby, I really do!