An anonymous reader quotes a report from TechCrunch: When it launched late last year, the new MacBook Pro’s Touch Bar was largely reliant on first-party applications to show off what it could do. Since then, a number of other companies have jumped on board, helping the secondary screen grow into something more than a novelty. Of course, as with any new technology, there are going to be some unanticipated downsides. Test-taking software company ExamSoft, for one, believes the input device could help facilitate cheating among students taking the bar exam. What’s perhaps most interesting here is that the company is calling out one of the Touch Bar’s more mundane features: predictive text. “By default,” the company writes, “the Touch Bar will show predictive text depending on what the student is typing, compromising exam integrity.” It’s hard to say precisely how the company expects a standard feature on mobile devices to help students pass one of the more notoriously difficult exams out there, but The Next Web notes that some states have already taken action. North Carolina, for one, has required test takers with the new model MacBooks to disable the Touch Bar, while New York is banning the machines altogether.
In October 2015, two gamers who used face-scanning tech found in 2K Games’ NBA series to create more realistic avatars filed a lawsuit against the company, as they were concerned about how 2K would store and use their biometric data. On Monday, however, a New York federal judge ruled that the plaintiffs had not established “sufficient injury” from either game’s biometric face-scanning tech, implying that their concerns over privacy were unfounded. Engadget reports: Using your console’s camera, the company employs face-scanning tech in its popular NBA series, with both 2K’s NBA 2K16 and 2K15 using the data to help players create more accurate avatars. In order to use the tech, players must first agree to 2K’s terms and conditions, consenting that their scanned face may be made visible to others. While the plaintiffs agreed to the publisher’s terms, the court case arose because the gamers claimed that 2K never made clear that scans would be stored indefinitely and that biometric data could be shared. With little evidence to suggest how their privacy would be at risk, the judge gave 2K the benefit of the doubt. Still, no matter the outcome, it’s a landmark case, with biometric data sure to play an increasingly important role in identifying individuals in the future. While there is certainly nothing that suggests that 2K will use the data for nefarious means, the result of this case does raise some interesting questions about who owns the right to your digital likeness.
An anonymous reader quotes a report from The Register: Source-code hub GitLab.com is in meltdown after experiencing data loss as a result of what it has suddenly discovered are ineffectual backups. On Tuesday evening, Pacific Time, the startup issued a sobering series of tweets, starting with “We are performing emergency database maintenance, GitLab.com will be taken offline” and ending with “We accidentally deleted production data and might have to restore from backup. Google Doc with live notes [link].” Behind the scenes, a tired sysadmin, working late at night in the Netherlands, had accidentally deleted a directory on the wrong server during a frustrating database replication process: he wiped a folder containing 300GB of live production data that was due to be replicated. Just 4.5GB remained by the time he canceled the rm -rf command. The last potentially viable backup was taken six hours beforehand. The Google Doc mentioned in the last tweet notes: “This incident affected the database (including issues and merge requests) but not the git repos (repositories and wikis).” So some solace there for users, because not all is lost. But the document concludes with the following: “So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.” At the time of writing, GitLab says it has no estimated restore time but is working to restore from a staging server that may be “without webhooks” but is “the only available snapshot.” That source is six hours old, so there will be some data loss.
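GitLab’s own conclusion — five backup/replication techniques deployed, none working reliably — is a reminder that backups need automated verification, not just a cron job that creates them. As a minimal, hypothetical sketch (the function name and thresholds here are illustrative, not anything GitLab actually runs; a real restore drill is the only true proof), a periodic check like this can flag a stale, missing, or suspiciously small backup artifact before it is ever needed:

```python
import os
import time

def backup_is_usable(path, max_age_hours=6, min_bytes=1024):
    """Sanity-check a backup artifact: it must exist, be recent,
    and be non-trivially sized. This catches silently failing
    backup jobs, though it cannot prove the dump is restorable."""
    if not os.path.isfile(path):
        return False
    stat = os.stat(path)
    age_hours = (time.time() - stat.st_mtime) / 3600.0
    return age_hours <= max_age_hours and stat.st_size >= min_bytes
```

Wired into monitoring, a failing check would page someone long before an incident, instead of the failure being discovered only when a restore is attempted.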
NASA’s Cassini probe has captured new images of Saturn’s rings in unprecedented detail. The images were captured during the penultimate phase of the probe’s mission, which includes “20 orbits that dive past the outer edge of the main ring system” before the spacecraft plunges into the planet itself. Interestingly, the rings include what NASA calls “moonlets” embedded in them. VOA News reports: The images are the closest ever taken of Saturn’s rings and, according to NASA, “resolve details as small as 550 meters, which is on the scale of Earth’s tallest buildings.” The “ring-grazing” orbits began last November and will continue until the end of April, and in addition to spotting the moonlets, they have given greater clarity to other structures within the rings, such as the so-called propeller-like formations. NASA added that Cassini has also provided the “closest-ever” glimpses of two small moons, Daphnis and Pandora. The report via NASA’s Jet Propulsion Laboratory (JPL) adds: “Some of the structures seen in recent Cassini images have not been visible at this level of detail since the spacecraft arrived at Saturn in mid-2004. At that time, fine details like straw and propellers — which are caused by clumping ring particles and small, embedded moonlets, respectively — had never been seen before. (Although propellers were present in Cassini’s arrival images, they were actually discovered in later analysis, the following year.) Cassini came a bit closer to the rings during its arrival at Saturn, but the quality of those arrival images (examples: 1, 2, 3) was not as high as in the new views. Those precious few observations only looked out on the backlit side of the rings, and the team chose short exposure times to minimize smearing due to Cassini’s fast motion as it vaulted over the ring plane. This resulted in images that were scientifically stunning, but somewhat dark and noisy.”
TechCrunch is reporting that Facebook is prioritizing “authentic” content in News Feed with a ranking algorithm change that detects and promotes content “that people consider genuine, and not misleading, sensational, or spammy.” The algorithm will also boost stories that are going viral in real time. From the report: To build the update, Facebook categorized Pages that frequently share inauthentic posts like fake news and clickbaity headlines, or get their posts hidden often. It then used these posts to train an algorithm that detects similar content as it’s shared in the News Feed. Facebook will now give extra feed visibility to posts that don’t show signs of similarity to inauthentic content. Meanwhile, Facebook wants to more quickly surface big stories going viral either because the topic is being posted about by lots of people, or a Page post about the topic is seeing tons of engagement. Facebook will then take that as a signal that you might temporarily care more about the topic, and therefore show it in your News Feed while it’s still hot. Facebook says it doesn’t anticipate significant changes to most Pages’ News Feed distribution, but some might see a small increase or decrease in referral traffic or outbound clicks depending on whether they share authentic, timely content versus inauthentic and outdated stories.
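Facebook hasn’t published details of its model, but the approach the report describes — collect posts from Pages known to share inauthentic content, train on them, then score new posts for similarity — can be sketched with a toy bag-of-words naive Bayes classifier. Everything below (the training snippets, function names, and smoothing constant) is illustrative and assumed, not Facebook’s actual system:

```python
from collections import Counter
import math

# Toy stand-ins for posts from Pages flagged as frequently sharing
# inauthentic content vs. ordinary Pages (hypothetical examples).
INAUTHENTIC = [
    "you won't believe what happened next click now",
    "shocking secret doctors don't want you to know",
]
AUTHENTIC = [
    "city council approves new budget for road repairs",
    "study finds modest gains in regional employment",
]

def tokenize(text):
    return text.lower().split()

def train(docs):
    """Build per-class word counts from a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(tokenize(doc))
    return counts

def score_inauthentic(text, bad, good, alpha=1.0):
    """Log-odds that a post resembles the inauthentic corpus,
    via naive Bayes with Laplace smoothing. Positive scores
    suggest the post leans 'inauthentic'."""
    vocab = set(bad) | set(good)
    bad_total = sum(bad.values()) + alpha * len(vocab)
    good_total = sum(good.values()) + alpha * len(vocab)
    score = 0.0
    for tok in tokenize(text):
        score += math.log((bad[tok] + alpha) / bad_total)
        score -= math.log((good[tok] + alpha) / good_total)
    return score
```

In this framing, a real system would score each post as it enters the feed and demote those with high similarity to the inauthentic corpus, rather than hard-blocking them — consistent with TechCrunch’s description of reduced visibility rather than removal.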