I was on my weekly bike ride through downtown when I noticed the new signs warning that autonomous vehicle testing will be taking place in downtown Austin, TX. Was it a coincidence that I was able to ride the bike lane through the 3rd Street corridor in one smooth, reasonably timed swoop? Or… will the city take this as an opportunity to use the bond funding we passed and put some well-timed additional effort into making the light timing and traffic flows predictable and free-flowing? Getting things as predictable as possible would help reduce the problem set a bit for this initial testing, as well as make life easier for the humans in this system.
By noise, they mean the grainy result of a low-light digital photo; a side benefit is that the technique can also easily remove textual noise. Currently, the result is “softer” than the original clean image, but I’m curious whether it will end up causing issues with watermarking or other copy-protection schemes. At what point will “good enough” be sufficient for a derivative use, when we deal in low-resolution imagery on the web all the time?
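If you want a hands-on feel for that “softening,” here’s a minimal sketch using classical non-local means denoising via OpenCV. To be clear, this is my stand-in, not the neural approach they describe, and the image path and noise level are made-up placeholders:

```python
# A rough illustration of classical denoising (non-local means via OpenCV),
# not the neural technique discussed above -- just enough to see the
# "softening" trade-off: the grain goes away, but fine detail blurs with it.
import cv2
import numpy as np

# "photo.jpg" is a placeholder path -- substitute any clean image you have.
original = cv2.imread("photo.jpg")

# Simulate low-light sensor grain with additive Gaussian noise.
noise = np.random.normal(0, 25, original.shape)
noisy = np.clip(original.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# Denoise. A larger filter strength (h) removes more noise but softens edges
# and texture -- the same trade-off that blurs a watermark along with the grain.
# Args: src, dst, h, hColor, templateWindowSize, searchWindowSize.
denoised = cv2.fastNlMeansDenoisingColored(noisy, None, 10, 10, 7, 21)

# Mean absolute per-pixel difference from the clean original: a crude measure
# of how "soft" the recovered image is compared to what we started with.
drift = np.abs(denoised.astype(np.float64) - original.astype(np.float64)).mean()
print(f"mean per-pixel drift from original: {drift:.1f} (on a 0-255 scale)")
cv2.imwrite("denoised.jpg", denoised)
```

Even this crude filter makes the point: anything aggressive enough to erase the noise also smears whatever else is sitting on top of the image.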
Many of us in collections rely on watermarks to make openly sharing our collections more palatable to our donors. Already, we have to warn them that there is no low-barrier way to really prevent unattributed image reuse… This is simply going to make that conversation even more difficult.
Just saw this over at the New York Times blog. I’ve had my own issues with memory, so things like this spark my interest pretty quickly. It seems that all this multitasking we do may be screwing with our precious short-term memory… uh oh.