I shot the show yesterday, but did not get it edited in post… so, this evening fer sure!
Category Archives: Computers, Science & Technology
I NEED a battery that lasts longer. And, I have a HUGE battery in my phone!
Android Authority – By: Jimmy Westenberg – “While it seems as though current lithium-ion batteries in the tech world are slowly improving, it’s becoming increasingly difficult for companies to build high-end devices that won’t waste precious battery life. A number of research labs and universities are trying to solve this battery problem, but not many have been successful in recent years. One of the latest companies to research new battery tech heavily is Google, according to a new report by The Wall Street Journal.
The group that is currently working on this new battery tech comes from the Google X research labs and is led by former Apple battery expert Dr. Ramesh Bhardwaj. According to ‘people familiar with the matter’, Google’s team originally began testing other companies’ batteries for use in Google’s own products. Since 2012, the team has shifted its efforts into building battery tech that Google will end up producing itself. The team of Google X lab workers only consists of four members, including Dr. Bhardwaj.
The Wall Street Journal explains:
At Google, Dr. Bhardwaj’s group is trying to advance current lithium-ion technology and the cutting-edge solid-state batteries for consumer devices, such as Glass and Google’s glucose-measuring contact lens, according to the people familiar with the matter.
Whatever Google is working on could progress the state of thin-film batteries to eventually be used in smartphones, wearables and even in devices that could be implanted into the human body.
The report doesn’t comment on the specific technology that Google is working on or when we can expect to see it in the real world. While this whole story is a little scarce on details, we’re happy to hear Google may be putting its resources towards an area that really needs it.”
We are closer to getting Amazon stuff via drone delivery!
Gizmodo – By: Maddie Stone – “Amazon’s much anticipated same-day drone delivery service Prime Air reached another milestone this week: The Federal Aviation Administration has just given Amazon clearance to begin flight-testing the drones in the United States. Again. For real this time.
This is the second time in as many months that the online retail giant has received a drone testing certificate from the FAA. Last time around, however, the certificate only applied to an already-obsolete prototype. Frustrated by the Feds’ inertia, Amazon recently began testing its delivery drones at a “top secret” location in Canada, just 2,000 feet from the US border.
Now, it seems, the company can finally commence its drone tests domestically. Sez the FAA’s director of flight standards service John Duncan, in a letter to Amazon:
This letter is to inform you that we have granted your request for exemption. The exemption would allow the petitioner to operate an unmanned aircraft system (UAS) to conduct outdoor research and development testing for Prime Air.
The letter goes on to outline the FAA’s terms and limitations, stating that Amazon can only conduct test flights up to 400 feet, that drones must not exceed 100 mph, and that they must remain within the ‘line of sight’ of their operator at all times. No big surprises here—these are similar to the rules outlined in last month’s defunct certificate, and to the proposed rules for commercial drones that the FAA drafted in February.
Amazon CEO Jeff Bezos first announced his vision for Prime Air, a drone delivery service that would transport packages from company warehouses to shoppers’ front doors in 30 minutes or less, in 2013. That vision is still hamstrung by the FAA’s recent regulations, particularly the line-of-sight requirement. Currently, Amazon is also prohibited from flying its drones over ‘densely populated areas.’ Still, the recent move should be taken as progress. From Amazon’s perspective, it may be only a small step toward a much larger goal, but at least we seem to be moving in the right direction.”
This article pitches the idea that the password’s time has come. I have to admit, it would be nice to have a better token, but what would work well and still be secure?
Open Source – By: Scott Nesbitt – “How many passwords do you have? Probably more than you can easily remember or comfortably manage on your own. And I’m willing to bet that you dread coming up with new ones when you sign up for something online.
Jonathan LeBlanc of PayPal is on a mission to replace the password with something more secure and easier to use.
He’s not a head-in-the-clouds dreamer or theorist, either. LeBlanc is head of developer advocacy for PayPal and Braintree, and has an abiding interest in security, identity, and social technologies. He’s also the author of Programming Social Applications and helped architect the developer authentication technology used by companies like PayPal and Yahoo.
At POSSCON 2015, LeBlanc will be giving a talk titled Kill All Passwords. I spoke to him to learn more about what’s wrong with the password and what can replace it.
What’s the problem with passwords?
The problem itself isn’t necessarily the password. The problem is that human beings are horrible at creating passwords that have any measure of complexity. If we look at the statistics on leaked passwords in 2014, approximately 5% of all people use password as a password. About 10% of the population uses either password, 123456, or 12345678. If we look at the top 1,000 leaked passwords, those account for 91% of leaked passwords.
We can build systems to perform device fingerprinting, location verification, and identification through usage habit identification, but all of that becomes secondary if password choices are weak.
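Even a tiny blocklist of the leaked passwords cited above goes a long way. Here is a minimal Python sketch of the kind of weak-password check a signup form might perform; the list is a small illustrative sample, not a real corpus:

```python
# A small sample of the most common leaked passwords mentioned above.
# A real check would use a much larger corpus of breached passwords.
COMMON_PASSWORDS = {"password", "123456", "12345678", "qwerty", "abc123"}

def is_weak(candidate: str) -> bool:
    """Return True if the candidate appears in the common-password list."""
    return candidate.lower() in COMMON_PASSWORDS

print(is_weak("password"))                      # True: a top leaked password
print(is_weak("correct horse battery staple"))  # False: not on the list
```

A check like this catches the roughly 10% of users the statistics describe before their weak choice ever reaches the database.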
How did we get to this point with passwords?
We’re human. We are inundated with technology and accounts day in and day out, and most people will choose a password that they can easily remember. That makes sense, but no matter how much we tell people that their password choices are incredibly insecure, people will still continue to use weak passwords and the same password on all of their accounts. We got to this point because we expected people to pick a convoluted series of multi-case characters, numbers, and symbols that means nothing to them in order to secure their accounts.
How did you get involved in this drive to kill the password?
This came naturally with the work that I have been involved in within the security and identity industry. About six years ago, when I was working at Yahoo, I was working with their OAuth 1 (later 1.0a) and OpenID integrations as well as some of the more experimental authentication technology that was used for their social logins and social application environment. This gave me my first real foray into some of the security architecture behind a login and has led to me helping to architect the authentication systems behind the PayPal developer products.
What I realized throughout all of this is that there is a fine line between the security of the systems and the usability of the systems. We had to find a balance where the user was protected as much as possible, but we were also able to give them an easy experience. This drive towards password-less authentication is an evolution of that.
Is there a way to make passwords secure that’s easy for everyone?
Absolutely. On the consumer side, password manager systems like 1Password or LastPass are becoming more prevalent and allow you to only remember one master password. Beyond that, your other accounts can have highly secure passwords that you have no way of remembering, and the system just remembers for you. Both personally and professionally, I use 1Password.
On the system side, we can further bring security to users by employing device and browser fingerprinting, region detection, and identification based on typical usage habits, all without the user being impacted by the additional levels of security.
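The fingerprinting LeBlanc describes can be sketched roughly: hash a few request attributes into a stable identifier and compare it against what was seen when the account was enrolled. The attribute names below are illustrative assumptions, not any particular product’s API:

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a set of device/browser attributes into a stable identifier."""
    # Sort keys so the same attributes always produce the same digest.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Fingerprint recorded at enrollment time.
enrolled = fingerprint({"user_agent": "Mozilla/5.0", "timezone": "UTC-5",
                        "screen": "1920x1080"})

# Fingerprint computed at a later login attempt.
login = fingerprint({"user_agent": "Mozilla/5.0", "timezone": "UTC-5",
                     "screen": "1920x1080"})

print(login == enrolled)  # True: same device attributes, same fingerprint
```

A mismatch would not block the login outright; it would typically trigger an extra verification step, which is how the check stays invisible to users on their usual device.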
What can replace the password?
If we break down the concept of a username and password, they are an identification of who you are (the username) and then a verification of that fact with something that only you should know (the password). Any technology that provides these facilities can do that, so it’s not so much about keeping the username and changing the password, but really just about picturing these systems in a different way.
How will open source technologies play a role in replacing passwords?
On the data security side, to further secure username/password authentication, you have a number of open source key hashing and salting implementations. When used properly, they allow for the secure storage of user information, including those passwords.
Authentication and authorization technologies like OAuth 1.0a, OAuth 2, and OpenID Connect all provide a more secure implementation for logging a user in and allowing applications to do things on their behalf. They do this without passing sensitive information like passwords back and forth between an application and the login host.
As we start to explore biometrics, wearables, embeddables, and other technologies, they potentially become another factor in telling a system who you are. Systems can use multiple authentication factors to turn that into a valid login. Open source hardware, especially microcontrollers and sensors, is being used to build these next-generation prototypes.
How secure are those technologies?
It really depends on what you’re trying to secure.
Let’s look at hashing for password security first. General purpose hash algorithms like MD5 and SHA1 are built for speed—to be able to handle as much data as possible in as short a time as possible. The problem with using those in password security is that since an attacker can’t reverse the hash, they might simply launch a brute force attack with different potential inputs until they generate the correct hash. The faster the hashing algorithm, the more viable this attack is.
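To see why speed matters, here is a rough Python sketch of the attack described above against an unsalted MD5 digest; the short candidate list stands in for the large dictionary a real attacker would use:

```python
import hashlib

# An attacker who obtains an unsalted MD5 digest can't reverse it, but
# can hash guesses until one matches. Fast hashes make this cheap.
leaked_digest = hashlib.md5(b"123456").hexdigest()

candidates = ["password", "letmein", "123456", "qwerty"]
for guess in candidates:
    if hashlib.md5(guess.encode()).hexdigest() == leaked_digest:
        print(f"cracked: {guess}")  # prints "cracked: 123456"
        break
```

Because MD5 is built for throughput, commodity hardware can test billions of such guesses per second, which is exactly the viability problem LeBlanc describes.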
Algorithms like bcrypt and PBKDF2 use a technique called key stretching. They allow you to determine how expensive (in terms of time and/or size) the hash function will be. We choose to make the hashing slower to prevent these potential attacks, but still fast enough not to impact a valid user. These algorithms are slow, but incredibly strong and secure.
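PBKDF2 is available in Python’s standard library, so the cost-parameter idea can be sketched directly. The iteration count and password below are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Derive a stored credential with PBKDF2; iterations is the cost knob."""
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    """Re-derive with the stored salt and cost, and compare in constant time."""
    probe = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(probe, digest)

salt, iters, digest = hash_password("hunter2")
print(verify("hunter2", salt, iters, digest))  # True
print(verify("wrong", salt, iters, digest))    # False
```

Raising the iteration count makes every attacker guess proportionally more expensive while a single legitimate login still completes in a fraction of a second.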
With biometrics, one concern is something called the false positive rate. That’s how often an invalid user is seen as a valid user and allowed access. Since most new studies on biometric authentication vary wildly, it’s difficult to determine exactly how secure most of them are. Biometrics are a great mechanism for identifying you, but a second factor of authentication is needed. Of course, some biometric sources are far superior to others when it comes to having low false positive rates. For instance, vein recognition technology, which measures vein uniqueness through blood flow, offers a higher level of security than fingerprint identification.
Which is the most promising of these technologies?
The work being done within the realm of biometrics through wearables, embeddables, injectables, and ingestibles has a lot of promise. Realistically, it’s going to be wearable devices and computers that drive the short-term advances, as anything in the embeddable realm is not yet seen as culturally acceptable by most of the population.
I think what we’re going to see are numerous mechanisms around personal identification that use a second factor of authentication that is accessible and known to the user, with the aim of eventually dismissing the username and password. This realm, and the technology that powers it, is currently being explored in commerce, medical applications, and a number of other industries.
Where else are these technologies being used?
A lot of the work that is being done with the future of biometrics is coming from the commerce and medical industries.
Within PayPal, for instance, we’re working with partners who are building vein recognition technology and heartbeat-identification bands. We’re also part of the board of the FIDO Alliance, which seeks to create a unified specification for the future of identification. Within the medical industry, we’re seeing embeddable sensors and wearable computers as some of the first human-incorporated technology of a new potential future identity.
Are humans still the weakest link in the chain?
Yes, humans will always be the weakest link because the vast majority will always choose the path of least resistance over the one that provides them the most security. Really, though, technology implementations are just as much to blame in many cases. Secure methodologies, such as using a complex password that is not easy to guess, mean that the person has to remember something that is meaningless to them, and it’s much harder for our brains to remember something that has no association to anything else.
Technologies such as password managers and biometrics are on the right path. The correct solution is to find the most secure way of providing authentication for the user without putting the onus on them to remember the complexities of that authentication.
When do you see the password dying (if ever)?
The password won’t die, it will just change. Much of the identification technology being worked on in internet security, biometrics, or elsewhere is looking at what a username and password actually are: identification of who you are and verification of that fact. Biometrics triggered through wearables, embeddables, or ingestibles, second-factor authentication systems, and many other technologies are all rising to meet this challenge.”
Microsoft REALLY wants to move you to Windows 10 when it is available!
Myce – By: Jan Willem Aldershoff – “Microsoft has released an optional update that ‘enables additional capabilities for Windows Update notifications when new updates are available to the user’. We discovered the update is actually a downloader for Windows 10 which will notify the user that Microsoft’s upcoming operating system can be downloaded.
Windows Update KB3035583 doesn’t reveal much about itself, only that it adds additional capabilities to Windows Update and applies to computers running Windows 8.1 or Windows 7 Service Pack 1. The update has been offered as an optional update since March 28th, and because it’s optional, users have to manually put a checkmark next to the update in order to receive it.
Once the update is downloaded it adds a folder to System32 called ‘GWX’ which contains 9 files and a folder called ‘Download’. One of the four .EXE files reveals what the update really is: the description of GWXUXWorker.EXE states, ‘Download Windows 10′. This explains the X in the name: the X is the Roman numeral 10.
The folder also contains ‘config.xml’ which contains some URLs that at the moment of writing didn’t work. The config file mentions ‘OnlineAdURL’ that points to https://go.microsoft.com/fwlink/?LinkID=526874 and Telemetry BaseURL pointing to http://g.bing.com/GWX/.
The section ‘Phases’ describes how the downloader should behave when the Windows 10 release date nears. Initially, during phase ‘None’, all features are disabled, then during phase ‘AnticipationUX’ advertising banners will be shown, presumably on a homescreen tile and additionally a tray icon will appear.
The next phase is called ‘Reservation’, which according to the config file will show the advertisement tile and the tray icon, but also a reservation page. Further phases cover the first publication of the final RTM (release to manufacturing) version, the general availability (GA), as well as various stages of the upgrade process such as UpgradeDownloadInProgress, UpgradeDownloaded, UpgradeReadyToInstall, UpgradeSetupCompatBlock, UpgradeSetupRolledBack and UpgradeSetupComplete.
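As a rough illustration of reading such a phase list, here is a Python sketch using the standard library’s XML parser. The schema below is a guess for illustration only; the article names just the ‘Phases’ section and the phase names, not the actual file layout:

```python
import xml.etree.ElementTree as ET

# A hypothetical reconstruction of the "Phases" section of GWX's
# config.xml, based only on the phase names the article mentions.
sample = """
<config>
  <Phases>
    <Phase name="None"/>
    <Phase name="AnticipationUX"/>
    <Phase name="Reservation"/>
    <Phase name="RTM"/>
    <Phase name="GA"/>
  </Phases>
</config>
"""

root = ET.fromstring(sample)
phases = [p.get("name") for p in root.find("Phases")]
print(phases)  # ['None', 'AnticipationUX', 'Reservation', 'RTM', 'GA']
```

A staged rollout driven by a config file like this lets Microsoft flip the downloader from dormant to advertising to upgrading without shipping a new update each time.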
It appears Microsoft is serious when it comes to upgrading Windows 7 and Windows 8.1 users to Windows 10. The upgrade will be free in the first year and it appears Microsoft will take that time to convince users to upgrade. Users that don’t want to receive the upgrade ‘advertisements’ should simply not install the optional update. If Microsoft, however, decides to make KB3035583 an important update, it will install automatically with other Windows updates.”
Yes, it has been 40 years… wow! I remember when they were “the new kid on the block.” Boy, am I a geezer!
GeekSnack – By: Jason Moth – “Believe it or not, Microsoft turned 40 years old today and although it’s been a while since Bill Gates stepped down as CEO, the co-founder is still very close to the company and its employees. So close, in fact, that Gates recently took the time to write a heartwarming letter to the workers in order to mark this very special occasion. After reminiscing a bit about the good old days when he and Paul Allen came up with the idea of putting a computer in every home, Bill Gates quickly moved on to more important things – the future of Microsoft. The company came a long way in its first 40 years, but judging by Bill Gates’ letter, the innovations have only just begun.
‘Under Satya’s leadership, Microsoft is better positioned than ever to lead these advances,’ Bill Gates said in the letter that can be found on Twitter. ‘We have the resources to drive and solve tough problems. We are engaged in every facet of modern computing and have the deepest commitment to research in the industry. In my role as technical advisor to Satya, I get to join product reviews and am impressed by the vision and talent I see. The result is evident in products like Cortana, Skype Translator, and HoloLens — and those are just a few of the many innovations that are on the way.’
Despite some people bashing current CEO Satya Nadella left and right for some of the questionable decisions he has made in the past, Bill Gates seems to be very confident in his ability to take Microsoft to the next level. Overall, Gates’ faith in Nadella seems justified if you ask me, given that lately we’re seeing more and more great product ideas coming from the Redmond-based company. Most notably, the upcoming Windows 10 operating system already looks very solid and may in fact end up being the best iteration of the OS yet. Not only that, but Windows 10 will even be available as a free upgrade to anyone, including pirates. The fact that Microsoft is extremely interested in everyone using its platforms is becoming increasingly clear, perhaps even more so than when Bill Gates was in charge.
‘We have accomplished a lot together during our first 40 years and empowered countless businesses and people to realize their full potential. But what matters most now is what we do next. Thank you for helping make Microsoft a fantastic company now and for decades to come,’ Bill Gates concluded.”
There was the Dominoes robot:
Then CERN Confirmed the Force Exists:
“Researchers at the Large Hadron Collider just recently started testing the accelerator for running at the higher energy of 13 TeV, and already they have found new insights into the fundamental structure of the universe. Though four fundamental forces – the strong force, the weak force, the electromagnetic force and gravity – have been well documented and confirmed in experiments over the years, CERN announced today the first unequivocal evidence for the Force. “Very impressive, this result is,” said a diminutive green spokesperson for the laboratory.
‘The Force is what gives a particle physicist his powers,’ said CERN theorist Ben Kenobi of the University of Mos Eisley, Tatooine. ‘It’s an energy field created by all living things. It surrounds us; and penetrates us; it binds the galaxy together.’
Though researchers are as yet unsure what exactly causes the Force, students and professors at the laboratory have already started to harness its power. Practical applications so far include long-distance communication, influencing minds, and lifting heavy things out of swamps.
Kenobi says he first started teaching the ways of the Force to a young lady who was having trouble revising for her particle-physics exams. ‘She said that I was her only hope,’ says Kenobi. ‘So I just kinda took it from there. I designed an experiment to detect the Force, and passed on my knowledge.’
Kenobi’s seminal paper “May the Force be with EU” – a strong argument that his experiment should be built in Europe – persuaded the CERN Council to finance the installation of dozens of new R2 units for the CERN data centre. These plucky little droids are helping physicists to cope with the flood of data from the laboratory’s latest experiment, the Thermodynamic Injection Energy (TIE) detector, recently installed at the LHC.
‘We’re very pleased with this new addition to CERN’s accelerator complex,’ said data analyst Luke Daniels of human-cyborg relations. ‘The TIE detector has provided us with plenty of action, and what’s more it makes a really cool sound when the beams shoot out of it.’
But the research community is divided over the discovery. Dark-matter researcher Dave Vader was unimpressed, breathing heavily in disgust throughout the press conference announcing the results, and dismissing the cosmological implications of the Force with the quip ‘Asteroids do not concern me’.
Rumours are growing that this rogue researcher hopes to delve into the Dark Side of the Standard Model, and could even build his own research station some day. With the academic community split, many are tempted by Vader’s invitations to study the Dark Side, especially researchers working with red lasers, and anyone really with an evil streak who looks good in dark robes.”
Then, there was “SmartBox by InBox”:
Then, there was the “Hailo Piggy Back”:
Not to mention Google allowing you to play Pac-Man on Google Maps!
Could the impossible happen? Will Microsoft EVER release Windows to Open Source? I gotta say, I don’t think so.
PC World – By: Mark Hachman – “However unlikely a future in which Microsoft makes Windows open source may sound, Microsoft has already taken considerable strides in that direction.
But instead of allowing developers to make changes to Windows and other products, it’s Microsoft’s fingers at the keyboard.
According to Microsoft Technical Fellow Mark Russinovich, a future that includes an open-source Windows could happen. ‘It’s definitely possible,’ Russinovich reportedly told an audience at the ChefCon conference in Santa Clara this week. ‘It’s a new Microsoft.’
‘Every conversation you can imagine about what should we do with our software—open versus not-open versus services—has happened,’ Russinovich added.
Why this matters: Saturday marks Microsoft’s 40th anniversary. Just a few years ago, such a statement by Russinovich would have been anathema to Microsoft—and if Bill Gates were still at the CEO’s desk, it might have resulted in a letter of termination. But this is the new Microsoft, forced into a spirit of cooperation and collaboration by increasing pressure on the PC and on its business model. This is still pie-in-the-sky stuff—but science fiction can become reality. Just ask Dick Tracy’s watch.
You can’t just toss away $4 billion per quarter
An open-source Windows would be unlikely in the near term, however. That would require Microsoft to expose its reams of code to public view, theoretically allowing developers to create their own proprietary, incompatible forks of Windows. That’s an absolute example, of course—Microsoft could decide to open the code to certain components within the OS—perhaps what will turn into the ‘legacy’ browser, Internet Explorer. But open-sourcing Windows—and perhaps making it free to use—would also require Microsoft to give up a large chunk of the $4 billion or so a quarter it collectively receives from Windows, Windows Phone, and Office licenses.
As Wired points out, Microsoft has agreed to provide OEMs a free copy of Windows for devices with displays under 8 inches. And it’s far more open to running open-source products on top of its Azure cloud services than it was.”
Will ARC lead to one OS for all?
Ars Technica – By: Ron Amadeo – “In September, Google launched ARC—the “App Runtime for Chrome,”—a project that allowed Android apps to run on Chrome OS. A few days later, a hack revealed the project’s full potential: it enabled ARC on every “desktop” version of Chrome, meaning you could unofficially run Android apps on Chrome OS, Windows, Mac OS X, and Linux. ARC made Android apps run on nearly every computing platform (save iOS).
ARC is an early beta, though, so Google has kept the project’s reach very limited—only a handful of apps have been ported to ARC, all the result of close collaborations between Google and the app developers. Now, though, Google is taking two big steps forward with the latest developer preview: it’s allowing any developer to run their app on ARC via a new Chrome app packager, and it’s allowing ARC to run on any desktop OS with a Chrome browser.
ARC runs on Windows, Mac, Linux, and Chrome OS thanks to Native Client (abbreviated “NaCl”). NaCl is a Chrome sandboxing technology that allows Chrome apps and plugins to run at “near native” speeds, taking full advantage of the system’s CPU and GPU. Native Client turns Chrome into a development platform: write to it, and it’ll run on all desktop Chrome browsers. Google ported a full Android stack to Native Client, allowing Android apps to run on most major OSes.
With the original ARC release, there was no official process to getting an Android app running on the Chrome platform (other than working with Google). Now Google has released the adorably-named ARC Welder, a Chrome app which will convert any Android app into an ARC-powered Chrome app. It’s mainly for developers to package up an APK and submit it to the Chrome Web Store, but anyone can package and launch an APK from the app directly.
Since anyone can get an app up and running, we decided to take a look at just what ARC was like with certain apps. It turns out ARC is based on Android 4.4 and runs Dalvik VM, not the faster Android Run Time (ART) that debuted in Android 5.0.
A lot of standalone apps, like Twitter, work perfectly, while many stop working because ARC is not a smartphone and is missing a lot of what makes Android Android. Which brings us to the next big improvement:
ARC gets serious with Google Play Services
September’s unofficial hack allowed us to explore a few limitations of the Android Runtime for Chrome. The biggest missing puzzle piece was the set of Google Play components, which weren’t supported in the early version. This made ARC less like “Google’s Android” and more like an unsupported AOSP fork. Any app that used Google Play Services for OAuth logins, Maps, in-app purchases, cloud-to-device messaging, Play Games support, or any of the myriad of other features would simply crash.
With this new release, ARC includes Google Play Services, potentially opening up compatibility for many apps that depend on Google’s proprietary ecosystem APIs. It’s not the full list of APIs from Play Services, though, only a handful: OAuth2, Google Cloud Messaging, Google+ sign-in, Maps, Location, and Ads. Developers have to specifically enable Play Services on ARC with ARC-specific metadata, too, so end users can’t go too crazy with other people’s apps.
While those APIs are pretty common and will certainly help compatibility, ARC is still missing a big chunk of Play Services, which will stop some apps from working. The biggest missing piece seems to be the Play Store’s in-app purchasing, which isn’t in the API list. The Chrome Web Store supports in-app purchasing, but it would require custom code from the app developer.
We can’t explore the full potential of Play Services on ARC, because it’s up to the app developer to add special metadata to the app to enable ARC’s special version of Play Services.
Write once, run anywhere?
So calling all developers: You can now (probably, maybe) run your Android apps on just about anything—Android, Chrome OS, Windows, Mac, and Linux—provided you fiddle with the ARC Welder and submit your app to the Chrome Web Store.
The App Runtime for Chrome and Native Client are hugely important projects because they potentially allow Google to push a “universal binary” strategy on developers. “Write your app for Android, and we’ll make it run on almost every popular OS! (other than iOS)” Google Play Services support is a major improvement for ARC and signals just how ambitious this project is. Some day it will be a great sales pitch to convince developers to write for Android first, which gives them apps on all these desktop OSes for free.
For now though, the project is just a developer preview. The next steps are to bring in the rest of the Play Services APIs, which will no doubt happen over the coming months. Google also needs to do something about the Chrome Web Store, which isn’t nearly as popular, feature-rich, or mature as the Play Store. Will they merge some day? Google already displays Chrome apps in the Google Play Store for Education.”