Radar trends to watch: October 2021

The unwilling star of this month’s trends is clearly Facebook. Between reports that they knew about the damage that their applications were causing long before that damage hit the news, their continued denials and apologies, and their attempts to block researchers from studying the consequences of their products, they’ve been in the news almost every day. Perhaps the most interesting item, though, is the introduction of Ray-Ban Stories, a pair of sunglasses with a camera built in. We’ve been talking about virtual and augmented reality for years; when will it enter the mainstream? Will Stories be enough to make it cool, or will it have the same fate as Google Glass?

AI

  • Researchers at Samsung and Harvard are proposing to copy the neuronal interconnections of parts of the brain, and “paste” them onto a semiconductor array, creating an integrated circuit that directly models the brain’s interconnections.
  • Using AI to understand “lost” languages, written languages that we don’t know how to translate, isn’t just about NLP; it sometimes requires deciphering damaged texts (such as eroded stone tablets) where humans can no longer recognize the written characters.
  • Inaccurate face recognition is preventing people from getting necessary government aid, and there are few (if any) systems for remediation.
  • DeepMind has been studying techniques for making the output of language generation models like GPT-3 less toxic, and found that there are no good solutions.
  • Apple is working on iPhone features to detect depression, cognitive decline, and autism.  A phone that plays psychiatrist is almost certainly a bad idea. How intrusive do you want your phone to be?
  • Reservoir computing is a neural network technique that has been used to solve computationally difficult problems in dynamic systems. It is very resource-intensive, but recent work has led to speedups by factors of up to a million. It may be the next step forward in AI. (A toy echo state network sketch appears after this list.)
  • Can AI be used to forecast (and even plan) the future of scientific research?  Not yet, but one group is working on analyzing the past 10 years of research for NASA’s Decadal Survey.
  • There have been many articles about using AI to read X-Rays. This one covers an experiment that uses training data from multiple sources to reduce one of the problems plaguing this technology: different X-ray machines, different calibration, different staff. It also places a human radiologist in the loop; the AI is only used to detect areas of possible abnormality.
  • It isn’t a surprise, but undergraduates who are studying data science receive little training in ethics, including issues like privacy and systemic bias.
  • Stanford’s Institute for Human-Centered Artificial Intelligence is creating a group to study the impact of “foundation” models on issues like bias and fairness. Foundation models are very large models like GPT-3 on which other models are built. Problems with foundation models are easily inherited by models built on top of them.
  • Can machine learning learn to unlearn?  That may be required by laws like GDPR and the European “right to be forgotten.” Can a model be trained to eliminate the influence of some of its training data, without being retrained from the beginning? (A sketch of one proposed approach appears after this list.)
  • DeepMind’s technology for upscaling image resolution looks really good. It produces excellent high-resolution images from pixelated originals, works on natural scenes as well as portraits, and they appear to have used a good number of Black people as models.
  • Amazon has announced details about Astro, its home robot. But questions remain: is this a toy? A data collection ploy? I don’t know that we need something that follows you around playing podcasts. It integrates with Amazon products like Ring and Alexa Guard.
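
To make the reservoir computing item a little more concrete, here is a minimal echo state network in NumPy: a fixed, random recurrent “reservoir” whose states feed a trained linear readout. The toy task (one-step prediction of a noisy sine wave) and all of the parameters are illustrative assumptions, not details from the work cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, random "reservoir"; only the linear readout is trained.
n_inputs, n_reservoir = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ np.atleast_1d(u) + W @ state)
        states.append(state.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
signal = np.sin(t) + 0.05 * rng.standard_normal(len(t))
X, y = run_reservoir(signal[:-1]), signal[1:]

# The readout is a simple ridge regression over reservoir states.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```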
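On machine unlearning, one research direction (my assumption; the item doesn’t name a technique) is SISA-style sharded training: split the data into shards, train one sub-model per shard, aggregate their predictions, and forget a sample by retraining only the shard that contained it. A toy sketch with scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

# Assign every training sample to one of a few shards; train one model per shard.
n_shards = 5
shard_ids = rng.integers(0, n_shards, size=len(X))
models = [
    LogisticRegression(max_iter=1000).fit(X[shard_ids == s], y[shard_ids == s])
    for s in range(n_shards)
]

def predict(x):
    """Aggregate the shard models by majority vote."""
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in models])
    return np.bincount(votes).argmax()

def forget(sample_index):
    """'Unlearn' one training sample by retraining only the shard that held it."""
    s = shard_ids[sample_index]
    keep = (shard_ids == s) & (np.arange(len(X)) != sample_index)
    models[s] = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

print("before:", predict(X[0]), "label:", y[0])
forget(0)   # remove sample 0's influence without retraining the other shards
print("after: ", predict(X[0]), "label:", y[0])
```

The point of the approach is that forgetting costs one shard’s retraining rather than the whole model’s.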

Security

  • Is self-healing cybersecurity possible by killing affected containers and starting new ones? That’s an interesting partial solution to cloud security, though it only comes into play after an attack has succeeded. (A minimal Kubernetes sketch of the kill-and-replace idea appears after this list.)
  • With three months to go in 2021, we’ve already seen a record number of zero-day exploits. Is this a crisis? Or is it good news, because bad actors are discovered more effectively? One thing is clear: discovering new 0days is becoming more difficult, making them more valuable.
  • The FBI had the decryption key for the Kaseya ransomware attack, but delayed sharing it with victims for three weeks. The FBI claims it withheld the key because it was planning a counterattack against the REvil group, which disappeared before the attack was executed.
  • Privacy for the masses? iOS 15 has a beta “private relay” feature that appears to be something like Tor. And Nahoft, an application for use in Iran, encodes private messages as sequences of innocuous words that can get by automated censors. (A toy sketch of that general idea appears after this list.)
  • HIPv2 is an alternative to TLS that is designed for implementing zero-trust security for embedded devices.
  • Kubescape is an open source tool to test whether Kubernetes has been deployed securely.  The tests are based on the NSA’s guidance for hardening Kubernetes.
  • Rootkits are hardly new, but now they’re being used to attack containers. Their goal is usually to mine bitcoin, and to hide that mining from monitoring tools. Tracee is a new tool, built with eBPF, that may help to detect successful attacks.
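
For the “kill affected containers and start new ones” idea, here is a generic sketch (not the product mentioned above) using the official Kubernetes Python client: delete a pod that some detector has flagged, and let the Deployment’s controller replace it with a fresh copy from the original image. The pod name and the detection step are hypothetical.

```python
# pip install kubernetes
from kubernetes import client, config

def replace_suspect_pod(name, namespace="default"):
    """Delete a pod flagged as compromised; if it is managed by a Deployment
    (via a ReplicaSet), the controller immediately starts a clean replacement."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    v1.delete_namespaced_pod(name=name, namespace=namespace, grace_period_seconds=0)

# Hypothetical: some detector (not shown) has flagged this pod as compromised.
replace_suspect_pod("web-5f7c9d6b8-abcde", namespace="production")
```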
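The item doesn’t describe how Nahoft actually works, so treat the following as a toy illustration of the general idea only: map each chunk of a message onto a small codebook of innocuous words. The 16-word codebook here is made up, and a real system goes to far greater lengths to produce natural-sounding text.

```python
# Toy illustration only: each 4-bit chunk of the message maps to an ordinary word.
# This made-up 16-word codebook is nothing like what Nahoft actually does.
CODEBOOK = [
    "apple", "river", "window", "garden", "coffee", "yellow", "market", "pillow",
    "summer", "candle", "mirror", "forest", "butter", "silver", "rocket", "meadow",
]
INDEX = {w: i for i, w in enumerate(CODEBOOK)}

def encode(message):
    words = []
    for byte in message.encode("utf-8"):
        words.append(CODEBOOK[byte >> 4])    # high nibble
        words.append(CODEBOOK[byte & 0x0F])  # low nibble
    return " ".join(words)

def decode(text):
    nibbles = [INDEX[w] for w in text.split()]
    data = bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))
    return data.decode("utf-8")

covert = encode("meet at noon")
print(covert)                       # reads as a harmless list of words
assert decode(covert) == "meet at noon"
```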

User Interfaces

  • Kids these days don’t understand files and directories. Seriously, Google’s dominance in everyday life means that users expect to find things through search. But search is often inadequate. It will be important for software designers to think through these issues.
  • Holograms you can touch? Aerohaptics uses jets of air to create the feeling of “touch” when interacting with a hologram. Another step towards the Star Trek Holodeck.
  • Fraunhofer has developed a system for detecting whether a driver is tired or asleep.  Software like this will be particularly important for semi-automated driving systems, which require support from a human driver.

Programming

  • What is property-based testing, anyway? Fuzzing? Unit tests at scale? Greater testing discipline will be required if we expect AI systems to generate code. Can property-based testing get there? (A short Hypothesis example appears after this list.)
  • Google Cloud has introduced Supply Chain Twin, a “digital twin” service for supply chain management.
  • OpenVSCode Server is an open source project that allows VSCode to run on a remote machine and be accessed through a web browser.
  • Ent is an open source object-relational mapping tool for Go that uses graph concepts to model the database schema. Facebook has contributed Ent to the CNCF.
  • Glean is an open source search engine for source code.  Looks like it’s a LOT better than grepping through your src directories.
  • Urbit looks like it could be an interesting operating system for decentralized peer-to-peer applications.
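
To make the property-based testing item concrete: you state an invariant and let the framework generate (and shrink) the inputs. A minimal example with Python’s Hypothesis library, using a small run-length encoder as the code under test:

```python
# pip install hypothesis pytest
from hypothesis import given, strategies as st

def run_length_encode(s):
    """Tiny run-length encoder standing in for the code under test."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def run_length_decode(runs):
    return "".join(ch * n for ch, n in runs)

# The property: decoding an encoding always reproduces the original string.
@given(st.text())
def test_round_trip(s):
    assert run_length_decode(run_length_encode(s)) == s
```

Running this under pytest makes Hypothesis try hundreds of generated strings, including empty and Unicode-heavy ones, and shrink any failing input to a minimal counterexample.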

Law

  • Facebook on regulation: Please require competitors to do the things we do. And don’t look at targeted advertising.
  • NFTs, generative art, and open source: do we need a new kind of license to protect artistic works that are generated by software?
  • China issues a Request for Comments on their proposed social media regulations. Google Translate’s translation isn’t bad, and CNBC has a good summary. Users must be notified about the use of algorithmic recommendations; users must be able to disable recommendations; and algorithmic recommendations must not be designed to create addictive behavior.
  • South Korea has passed a law that will force Apple and Google to allow app developers to use payment systems other than the app stores’ own.
  • Research by Google shows that, worldwide, government-ordered Internet shutdowns have become much more common in the past year. These shutdowns are usually intended to suppress dissent. India has shut down Internet access more than any other country.

Biology

  • George Church’s startup Colossal has received venture funding for developing “cold tolerant Asian elephants” (as Church puts it), a project more commonly known as de-extincting woolly mammoths.
  • Researchers at NYU have created artificial cell-like objects that can ingest, process, and expel objects. These aren’t artificial cells, but represent a step towards creating them.

Hardware

  • A breakthrough in building phase-change memory that consumes little power may make the technology practical, allowing tighter integration between processors and storage.
  • Mainframes aren’t dead. The Telum is IBM’s new processor for its System Z mainframes: a 7 nm process, a 5 GHz base clock, and 8 cores with 2 threads each. It’s a very impressive chip.
  • One of Google’s X companies has deployed a 20 Gbps Internet trunk using lasers. The connection crosses the Congo River, a path that is difficult because of the river’s depth and speed.  This technology could be used in other places where running fiber is difficult.
  • Facebook and Ray-Ban have released smart glasses (branded as Ray-Ban Stories), which are eyeglasses with a built-in camera and speakers. This is not AR (there is no projector), but a step on the way. Xiaomi also appears to be working on smart glasses, and Linux is getting into the act with a work-oriented headset called Simula One.

Quantum Computing

  • IBM introduces Qiskit Nature, a platform for using quantum computers to experiment with quantum effects in the natural sciences. Because these experiments are about the behavior of quantum systems, they (probably) don’t require the error correction that’s necessary to make quantum computing viable. (A small Qiskit sketch appears after this list.)
  • Want to build your own quantum computer?  IBM has open sourced Qiskit Metal, a design automation tool for superconducting quantum computers.
  • Curiously named “valleytronics” uses electrons’ “valley pseudospin” to store quantum data. It might enable small, room-temperature quantum computers.
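
The sketch below doesn’t use Qiskit Nature’s own API; it’s a generic Qiskit example of the kind of experiment that library targets: one first-order Trotter step of a two-spin system (a ZZ coupling plus a transverse field), sampled on the Aer simulator. The Hamiltonian and parameters are illustrative assumptions.

```python
# pip install qiskit  (includes the Aer simulator in the 2021 releases)
from qiskit import QuantumCircuit, Aer, execute

# One first-order Trotter step of exp(-iHt) for H = J*ZZ + h*(X1 + X2),
# built from standard gates (RZZ for the coupling, RX for the field).
t, J, h = 1.0, 0.5, 0.3
qc = QuantumCircuit(2)
qc.h([0, 1])             # start both spins in the |+> state
qc.rzz(2 * J * t, 0, 1)  # ZZ coupling term
qc.rx(2 * h * t, 0)      # transverse field on each spin
qc.rx(2 * h * t, 1)
qc.measure_all()

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=2000).result().get_counts()
print(counts)
```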

Social Media

  • Facebook has put “Instagram for Kids” on hold. While they dispute the evidence that Instagram harms teenagers, public outcry and legislative pressure, along with Facebook’s own evidence that Instagram is particularly damaging to teenage girls, have caused them to delay the release.
  • Twitter is allowing bot accounts to identify themselves as bots.  Labeling isn’t mandatory.
  • Facebook adds junk content to its HTML to prevent researchers from using automated tools to collect posts.

