I attended a talk recently where the speakers discussed how the capacity to produce deepfaked video would affect our ability to trust what we see as evidence, both in the media and in the court system. At that discussion, some suggested that textual media might then retain some form of advantage in the credibility game…
Researchers, scared by their own work, hold back “deepfakes for text” AI
Not new by any stretch of the imagination. However, RACI and its alternatives are a great way to talk about the levels of involvement needed to keep everybody “in the loop” as you work on projects that affect an organization.
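A RACI matrix is, at heart, just a table mapping tasks to roles and involvement levels. A minimal sketch in Python, with illustrative task and role names (not from any real project):

```python
# A RACI matrix as a plain data structure: task -> {role: level},
# where level is one of R (Responsible), A (Accountable),
# C (Consulted), I (Informed). Names here are made up for illustration.
raci = {
    "Draft project charter": {"PM": "A", "Sponsor": "C", "Team": "R", "Legal": "I"},
    "Approve budget":        {"PM": "R", "Sponsor": "A", "Team": "I", "Legal": "C"},
}

def roles_with(matrix, level):
    """Return {task: [roles]} for a given RACI level (R/A/C/I)."""
    return {
        task: sorted(role for role, lvl in roles.items() if lvl == level)
        for task, roles in matrix.items()
    }

print(roles_with(raci, "A"))
# → {'Draft project charter': ['PM'], 'Approve budget': ['Sponsor']}
```

A quick sanity check like `roles_with(raci, "A")` also makes it easy to enforce the usual rule of thumb that each task has exactly one Accountable role.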
We have tested the Double Robotics telepresence robot in the past, but I’m trying to avoid going down a bit of an open source telepresence rabbit hole today. Documenting some sites and code to explore later…
Double Robotics: Double gives you a physical presence at work or school when you can’t be there in person. (not open-source)
It has come to my attention that the primary use of EVTradingPost’s systems has come to be posting scam listings and attempting to defraud sellers. The platform I was using offers no tools to help manage that situation, and even if it did – I have been clear – this is a side passion project for me. While this is a painful decision, I must acknowledge that I have no time to spend worrying about what people are posting, and I absolutely don’t want to spend my personal funds sponsoring fraud and scams. As such – I have shut the site down while I decide what to do with it next.
If the site was useful to you in the time it was up – I am glad.
I was on my weekly bike ride through downtown when I noticed the new signs warning that autonomous vehicle testing will be taking place in downtown Austin, TX. Was it a coincidence that I was able to ride the 3rd Street corridor bike lane in one smooth, reasonably timed swoop? Or will the city take this as an opportunity to use the bond funding we passed to put some well-timed additional effort into making light timing and traffic flows predictable and free-flowing? Getting things as predictable as possible would help shrink the problem set for this initial testing, as well as making life easier for the humans in the system.
Spinning projects out of Google X into their own divisions holds no guarantee that they will succeed, but it is encouraging to see Google placing more weight behind its internet connectivity projects. You might have seen the news about these systems’ ability to adapt and provide emergency coverage in the wake of the devastation in Puerto Rico.
Project Loon delivers internet to 100,000 people in Puerto Rico
FWIW – Drone delivery (Wing) is an interesting spin-out too. I would have thought that this use case needed more time in the incubator. But perhaps that’s the strategy: prove the technology, then spin projects out quickly to see if they find their use cases.
By “noise,” they tend to mean the grainy result of a low-light digital photo; a side benefit is that the same approach can also easily remove textual noise. Currently, the result is “softer” than the original clean image, but I’m curious whether it will end up causing issues for watermarking and other copy protection schemes. At what point will “good enough” be sufficient for a derivative use, when we deal in low-resolution imagery on the web all the time?
Many of us in collections rely on watermarks to make openly sharing our collections more palatable to our donors. We already have to warn them that there is no low-barrier way to truly prevent unattributed image reuse… This is only going to make that conversation more difficult.
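The statistical intuition behind this kind of denoising can be sketched in a few lines: zero-mean noise averages out, so combining many independently noisy views of a signal recovers something close to the clean original. This is only an illustration of that intuition with made-up numbers, not the actual method from the article:

```python
import numpy as np

# Stand-in for one row of a clean image, plus many independently
# noisy copies of it (zero-mean Gaussian noise). Averaging across
# the copies cancels the noise and approaches the clean signal.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 64)
noisy = clean + rng.normal(0.0, 0.2, size=(1000, 64))  # 1000 noisy copies

denoised = noisy.mean(axis=0)  # average across the noisy copies
err_noisy = np.abs(noisy[0] - clean).mean()      # error of one noisy copy
err_denoised = np.abs(denoised - clean).mean()   # error after averaging

print(err_denoised < err_noisy)  # → True
```

A learned denoiser aims for a similar result from a single noisy input, which is exactly why overlaid marks that behave like noise are so easy for it to strip away.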