I think it is funny that she doesn’t know what a “Red Hat” is. Hello? Red Hat Linux? Some confusion here, but ya gotta love the pink motif!
Can YOU think of ways to use this in a security/social engineering scenario?
This is an “ooopsie” for sure!
Gizmodo – By: Tom McKay – “A serious flaw in Google Keystone, which controls Chrome updates, is capable of doing major damage to macOS file systems on some computers and has been linked to data corruption that struck Hollywood video editors and others on Monday evening, Variety reported.
Initially, blame for the corrupted file systems was largely directed at Avid and its Media Composer software, which was identified as a common link by film and TV editors who said they could not reboot their Mac Pros after shutdown. But on Tuesday evening, Google told users via its support forums that it had ‘recently discovered that a Chrome update may have shipped with a bug that damages the file system on MacOS machines’ and ‘paused the release while we finalize a new update that addresses the problem.’
According to 9to5Google, what actually happened is that version 126.96.36.199 of the Keystone software shipped with an update that damages the macOS filesystem when System Integrity Protection (SIP), a security measure that keeps unauthorized software from modifying protected data, has been disabled or is not present (versions of OS X predating the 2015 El Capitan update). The real cause of the issue was first identified by the Mr. Macintosh blog.
‘If you have not taken steps to disable System Integrity Protection and your computer is on OS X 10.9 or later, this issue cannot affect you,’ Google said in its support note.
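If you want to check your own Mac, Apple’s built-in `csrutil` tool reports whether SIP is enabled. (This is a general macOS diagnostic, not something from the article; it works on OS X 10.11 El Capitan and later.)

```shell
# Run in Terminal on macOS 10.11 (El Capitan) or later.
# If the status comes back "enabled", the Keystone bug described
# above could not have touched your filesystem.
csrutil status
```

If the status comes back disabled (common on editing workstations that run third-party graphics cards), that machine was in the at-risk group.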
Apparently, video editors may have raised the alarm about the issue first because disabling SIP is a requirement to run third-party graphics cards. Variety reported that ‘dozens of machines at multiple studios’ were disabled, including the entire video editing team working on ABC’s Modern Family.”
These are pirate services that re-show streams from Dish Network or DirecTV. If you subscribe to one, be aware!
Cord Cutters News – By: Luke Bouma – “This week we saw a major network of IPTV services with over 50 million customers get shut down by a police action in Europe. Now Hollywood and the MPAA are getting into the action as they use copyright claims to take down multiple IPTV services.
According to a report from TorrentFreak, multiple IPTV service domains including BestTVStreams, OneStepTV, TVStreamsNow, and DoozerIPTV have all been seized by the MPAA, now called MPA America, for copyright infringement.
Other services like XCaliberTV have started to inform their customers that their service was also shut down due to a copyright infringement claim.
These are just a few of the IPTV services that have been quietly taken offline in the last few weeks. For years, IPTV and streaming services have promised a huge collection of TV channels for crazy low rates. Now Hollywood has taken notice and, after a few early legal wins against companies like Set TV NOW and the Dragon Box, has decided to go full speed ahead in its efforts to shut down what it sees as pirate services.
What is strange here is that ACE and MPA America are usually very vocal when they shut down an IPTV or pirate box service. Yet both have been silent about their recent successes. Typically, ACE and MPA America try to make examples out of services.
Over the last week, more than 50 million IPTV customers have lost their service. Many have been left to wonder if they will get their money back, because they had prepaid for a year or more. Multiple Cord Cutters News readers have said that when they asked about refunds, their messages went unanswered. Service social media accounts have also been deleted.
This is likely just the start, if the reports coming out of Europe are to be believed. According to police, during their raid on Xstream-codes.com they obtained the names of over 5,000 resellers. Many are predicting that information will be used to go after current resellers, and if companies like Dish have their way, they will go after IPTV subscribers as well.
We have also seen the US Department of Justice go after streaming services they say are pirate services. iStreamItAll was once one of the most popular private Roku Channels before it was removed. Now iStreamItAll’s website has been seized by the FBI following a Grand Jury indictment of the owner of the service.
Dish has won several lawsuits in Puerto Rico targeting resellers of IPTV services, winning $412,500 and $305,000 in damages, not from the people who owned the IPTV services, but from the resellers.
What has been made clear recently is that a massive amount of resources is being spent to shut down pirate services, with IPTV being a major focus of Hollywood. When you see numbers like 50 million subscribers, you can see why.”
You will still be able to use YouTube on your PC, but the “lean-back” mode for some players will no longer work. You will have to use a special TV-friendly app.
Cord Cutters News – By: Luke Bouma – “If you have had a Fire TV for the last year or so, you have likely used the YouTube Web interface. This special version of YouTube’s website lets Web browsers on your TV easily use YouTube.com and have it look and act a lot like the YouTube app for TVs.
Now YouTube has started to warn users that its Web-based TV interface will soon be going away. It’s now directing anyone who wants to watch YouTube on their TV to get the app version of YouTube. (Hopefully, your device has a YouTube app for your TV.)
This news comes after YouTube made changes to its back end last week that broke many third-party YouTube apps. Many of these apps quickly found workarounds, but it seems that YouTube is working on something.
The question now is: are these changes an effort to force people to use the YouTube app, or is YouTube preparing for a rollout of a few updates that are in the works?
Recently we saw Hulu end support for several older legacy devices as it prepared to roll out its new app with a new user interface and a traditional grid guide. For now, we will have to wait to see if YouTube is doing this as the first step of something larger or just ending support for something that is only used by a small number of users.”
This article from TechCrunch says it all:
TechCrunch – By: Zack Whittaker – “Microsoft has warned Windows users to install an ‘emergency’ out-of-band security patch.
The software giant said in an advisory that a security flaw in some versions of Internet Explorer could allow an attacker to remotely run malicious code on an affected device. A user could be stealthily infected by visiting a malicious web page or by being tricked into clicking on a link in an email.
‘An attacker who successfully exploited the vulnerability could take control of an affected system,’ said Microsoft.
Microsoft said the vulnerability was under active exploitation, though details of the flaw had not been made public.
More than 7% of all browser users are running affected versions of Internet Explorer 9, 10 and 11, according to recent data. All supported versions of Windows are affected, including Windows 7, Windows 8.1 and Windows 10, as well as several Windows Server versions.
Most users can install the patches using Windows Update.
Microsoft also issued a fix for its built-in malware scanner, Windows Defender, which, if exploited, could have triggered a denial-of-service condition that left the app unable to work.
The company said no action was required by users to remediate the bug in Windows Defender.
It’s rare but not unheard of for Microsoft to release emergency security patches outside of its typical monthly patching cycle. The company typically releases security fixes in the second week of each month on its so-called Patch Tuesday, but also will release fixes for significant vulnerabilities under active exploitation as soon as they are made available.
Homeland Security also issued its own advisory urging affected users to install the patches.”
This video is on BitChute, because it can’t be on Google’s service, YouTube, obviously. The idea is that Google, inside the walls of its offices, calls something “fair” ONLY if it matches its OWN political opinion. Whether you support President Trump or not, you should find this VERY disturbing. Next time it may be YOUR candidate, or cause, that is not “fair” in the opinion of those in power.
This organization is a very “right”-leaning organization called “Project Veritas,” which means “Project Truth.” They use hidden-camera videos to expose the “internals” of many organizations, like Planned Parenthood. Now, they are exposing Google and its alleged left-leaning political agenda.
Again, you may be “left leaning” yourself, and believe this is all great! But, the shoe can always end up on the other foot! This is why we must be FREE to express all opinions. Freedom of speech is just that, freedom! Then, you make up your OWN mind about what is correct, and truthful.
“AI,” or “Artificial Intelligence,” drives Google products, like YouTube, but it has issues! Are “highly engaged,” and therefore opinion-driven, hyper-users “shaping” the results of content?
Wired – By: Guillaume Chaslot – “From 2010 to 2011, I worked on YouTube’s artificial intelligence recommendation engine – the algorithm that directs what you see next based on your previous viewing habits and searches. One of my main tasks was to increase the amount of time people spent on YouTube. At the time, this pursuit seemed harmless. But nearly a decade later, I can see that our work had unintended – but not unpredictable – consequences. In some cases, the AI went terribly wrong.
Artificial intelligence controls a large part of how we consume information today. In YouTube’s case, users spend 700,000,000 hours each day watching videos recommended by the algorithm. Likewise, the recommendation engine for Facebook’s news feed drives around 950,000,000 hours of watch time per day.
In February, a YouTube user named Matt Watson found that the site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.
When the news broke, Disney and Nestlé pulled their ads off the platform. YouTube removed thousands of videos and blocked commenting capabilities on many more.
Unfortunately, this wasn’t the first scandal to strike YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories.
Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.
Anatomy of an AI Disaster
Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user – and users like them – to find and recommend other videos that they will engage with.
In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes – that is, the more data it has – the more efficient it will become at recommending specific user-targeted content.
Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.
But this incident is just a single example of a bigger issue.
How Hyper-Engaged Users Shape AI
Earlier this year, researchers at Google’s DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that ‘feedback loops’ in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.
The model didn’t take into account how the recommendation system influences the kind of content that’s created. In the real world, AI, content creators, and users heavily influence one another. Because AI aims to maximize engagement, hyper-engaged users are seen as ‘models to be reproduced.’ AI algorithms will then favor the content of such users.
The feedback loop works like this: (1) People who spend more time on the platforms have a greater impact on recommendation systems. (2) The content they engage with will get more views/likes. (3) Content creators will notice and create more of it. (4) People will spend even more time on that content. That’s why it’s important to know who a platform’s hyper-engaged users are: They’re the ones we can examine in order to predict which direction the AI is tilting the world.
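As a toy sketch (my own illustration, not anything from YouTube’s actual code), the four-step loop above can be simulated: whatever the heaviest users favor gets a recommendation boost each round, and that boost compounds.

```python
def run_feedback_loop(scores, heavy_user_pick, rounds=5, boost=2.0):
    """Each round, the item hyper-engaged users favor has its
    recommendation score multiplied, pulling it further ahead."""
    scores = dict(scores)  # copy so the caller's dict isn't mutated
    for _ in range(rounds):
        scores[heavy_user_pick] *= boost
    return scores

# Two items start with identical recommendation scores...
start = {"cat_video": 10.0, "divisive_clip": 10.0}
after = run_feedback_loop(start, "divisive_clip")
# ...but after five rounds the one favored by hyper-engaged
# users is 32x ahead: {"cat_video": 10.0, "divisive_clip": 320.0}
```

The point of the sketch is the compounding: the gap between favored and unfavored content grows multiplicatively, round after round, which is why a small group of hyper-engaged users can tilt what everyone else sees.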
More generally, it’s important to examine the incentive structure underpinning the recommendation engine. The companies employing recommendation algorithms want users to engage with their platforms as much and as often as possible because it is in their business interests. It is sometimes in the interest of the user to stay on a platform as long as possible—when listening to music, for instance – but not always.
We know that misinformation, rumors, and salacious or divisive content drive significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they’ve engaged with it. By then, it’s too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.
Even the best AI in the world – the systems written by resource-rich companies like YouTube and Facebook – can actively promote upsetting, false, and useless content in the pursuit of engagement. Users need to understand the basis of AI and view recommendation engines with caution. But such awareness should not fall solely on users.
In the past year, companies have become increasingly proactive: Both Facebook and YouTube announced they would start to detect and demote harmful content.
But if we want to avoid a future filled with divisiveness and disinformation, there’s much more work to be done. Users need to understand which AI algorithms are working for them, and which are working against them.”
Have you ever thought about how you use social media? Do you have a daily habit of checking Facebook, or Twitter, or Instagram? Is this necessarily bad in and of itself? It doesn’t have to be, as long as it doesn’t become an addiction! You know that you are addicted if you have a “gnawing feeling” that you are missing out if you don’t check your social media accounts several times every day! Have there ever been several days in which you have not checked your social media? Maybe you got too busy, you had things going on in your life, or perhaps you just didn’t feel like you had the time to sit down and check your social media accounts. If so, you may be the exception and not the rule! It is surprising how many people can’t let a day go by without checking their social media!
This can be a form of addiction. Now I’m not necessarily saying that you will go into some kind of sweating, shaking withdrawal if you don’t check your social media accounts. But just that little “gnawing” in the back of your mind may be an indication that addiction is a possibility! The bottom line is, I want to use social media, but not let it use me!
Another problem to consider: if you get all your information, or even a large portion of it, from social media, are you thinking about it critically? I have mentioned in the last several articles the need for critical thinking. It is not enough to know that we need to think about what we see, what we read, and what we hear through media. We need to always stop and ask ourselves several questions. The key questions are: “Who wrote what I am reading?” Not who sent it. Not who posted it. Because that could have been a close friend, someone you know and trust. But we need to ask, “Where did they get it?” Are they just mindlessly forwarding something that caught their attention briefly, with no idea as to the source of the information?

As an example, let’s say a close, trusted friend posts an article that indicates that a celebrity has died. That celebrity is one of your favorite actors. You then post on your timeline how much you regret that actor’s death. Or, you simply share the article that your friend posted about the actor’s death. Then you find out that the actor in question is still alive! You feel kind of silly. That’s a fairly harmless example of the problem that we’re talking about. But what if the posts you read, even from trusted friends, concern an issue that is more important, or more critical? You could pass along information whose origin you did not know, and whose contents haven’t been confirmed. That piece of information could be read by someone, influence them to think a certain way, to perceive the world a certain way, and may even drive them to some form of action that you never thought of, or intended!

Finally, ask yourself, “Is someone trying to influence me with this information?” Don’t be a “lemming” that jumps off into the sea because all the other lemmings are jumping! Think for yourself!
You can see why the process in this scenario is troubling. Blindly posting, or sharing, information can create situations that have dire consequences! This is a sad fact, but one that we need to take to heart. If you see something online, no matter the source, check out the facts for yourself. Don’t let someone influence you with a random post; check out the source, check out the motive, and find out the facts for yourself!