Is Google “Shaping” What is Considered “Fair?”

Insider Blows Whistle and Exec Reveals Google Plan to Prevent “Trump Situation” in 2020 on Hidden Cam

This video is on BitChute, because it obviously can’t be on Google’s own service, YouTube. The idea is that Google, inside the walls of its offices, calls something “fair” ONLY if it matches its OWN political opinion. Whether you support President Trump or not, you should find this VERY disturbing. Next time it may be YOUR candidate, or cause, that is not “fair” in the opinion of those in power.

“Project Veritas” (“veritas” is Latin for “truth”) is a very “right”-leaning organization. They use hidden-camera videos to expose the “internals” of many organizations, like Planned Parenthood. Now, they are exposing Google and its alleged left-leaning political agenda.

Again, you may be “left-leaning” yourself, and believe this is all great! But the shoe can always end up on the other foot! This is why we must be FREE to express all opinions. Freedom of speech is just that, freedom! Then, you make up your OWN mind about what is correct, and truthful.

Google’s AI Can Cause Problems

“AI,” or “artificial intelligence,” drives Google products like YouTube, but it has issues! Are “highly engaged,” and therefore opinion-driven, hyper-users “shaping” the results of content?

The Toxic Potential of YouTube’s Feedback Loop

Wired – By: Guillaume Chaslot – “From 2010 to 2011, I worked on YouTube’s artificial intelligence recommendation engine – the algorithm that directs what you see next based on your previous viewing habits and searches. One of my main tasks was to increase the amount of time people spent on YouTube. At the time, this pursuit seemed harmless. But nearly a decade later, I can see that our work had unintended – but not unpredictable – consequences. In some cases, the AI went terribly wrong.

Artificial intelligence controls a large part of how we consume information today. In YouTube’s case, users spend 700,000,000 hours each day watching videos recommended by the algorithm. Likewise, the recommendation engine for Facebook’s news feed drives around 950,000,000 hours of watch time per day.

In February, a YouTube user named Matt Watson found that the site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.

When the news broke, Disney and Nestlé pulled their ads off the platform. YouTube removed thousands of videos and blocked commenting capabilities on many more.

Unfortunately, this wasn’t the first scandal to strike YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories.

Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.

Anatomy of an AI Disaster

Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user – and users like them – to find and recommend other videos that they will engage with.

In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes – that is, the more data it has – the more efficient it will become at recommending specific user-targeted content.
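The engagement-driven targeting the author describes can be sketched as a toy ranker. This is a hypothetical illustration only, NOT YouTube’s actual algorithm; the video names, user segments, and numbers are all invented for the example:

```python
# Toy sketch of an engagement-maximizing recommender (hypothetical).
# It ranks candidate videos by predicted watch time for a user segment,
# so different segments see very different "top" recommendations.

# Invented predicted engagement (expected minutes watched) per user segment.
PREDICTED_MINUTES = {
    "cooking_tutorial": {"casual": 4.0, "hyper_engaged": 3.0},
    "divisive_rant":    {"casual": 2.0, "hyper_engaged": 9.5},
    "news_summary":     {"casual": 3.5, "hyper_engaged": 2.5},
}

def recommend(user_segment, candidates=PREDICTED_MINUTES):
    """Return candidate videos sorted by predicted watch time, best first."""
    return sorted(candidates,
                  key=lambda video: candidates[video][user_segment],
                  reverse=True)

print(recommend("casual"))         # benign content ranks first
print(recommend("hyper_engaged"))  # divisive content ranks first
```

The point of the sketch: nothing in the ranking logic is “malicious,” yet optimizing purely for predicted engagement steers the users most likely to engage with troubling content straight toward it.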

Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.

But this incident is just a single example of a bigger issue.

How Hyper-Engaged Users Shape AI

Earlier this year, researchers at Google’s DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that ‘feedback loops’ in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.

The model didn’t take into account how the recommendation system influences the kind of content that’s created. In the real world, AI, content creators, and users heavily influence one another. Because AI aims to maximize engagement, hyper-engaged users are seen as ‘models to be reproduced.’ AI algorithms will then favor the content of such users.

The feedback loop works like this: (1) People who spend more time on the platforms have a greater impact on recommendation systems. (2) The content they engage with will get more views/likes. (3) Content creators will notice and create more of it. (4) People will spend even more time on that content. That’s why it’s important to know who a platform’s hyper-engaged users are: They’re the ones we can examine in order to predict which direction the AI is tilting the world.
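The four numbered steps above can be sketched as a toy simulation. The numbers (a 5x engagement multiplier, creators shifting production halfway toward the measured signal) are invented assumptions purely to make the loop visible, not measurements of any real platform:

```python
# Toy simulation of the four-step feedback loop (hypothetical numbers):
# hyper-engaged users over-weight the engagement signal for divisive
# content, creators chase that signal, and the loop amplifies itself.

def simulate(rounds=5, divisive_share=0.10):
    """Track what fraction of new content is 'divisive' each round."""
    history = [divisive_share]
    for _ in range(rounds):
        # Steps (1)+(2): hyper-engaged users engage ~5x more with divisive
        # content, so its measured engagement exceeds its actual share.
        divisive_signal = divisive_share * 5.0
        other_signal = (1 - divisive_share) * 1.0
        signal = divisive_signal / (divisive_signal + other_signal)
        # Steps (3)+(4): creators shift production halfway toward the signal.
        divisive_share = (divisive_share + signal) / 2
        history.append(round(divisive_share, 3))
    return history

print(simulate())  # the divisive share grows every round
```

Even starting at a 10% share, the divisive fraction climbs round after round, because the measured signal always overstates its true popularity. That is the tilt the author is warning about.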

More generally, it’s important to examine the incentive structure underpinning the recommendation engine. The companies employing recommendation algorithms want users to engage with their platforms as much and as often as possible because it is in their business interests. It is sometimes in the interest of the user to stay on a platform as long as possible—when listening to music, for instance – but not always.

We know that misinformation, rumors, and salacious or divisive content drive significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they’ve engaged with it. By then, it’s too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.

Even the best AI in the world – the systems written by resource-rich companies like YouTube and Facebook – can actively promote upsetting, false, and useless content in the pursuit of engagement. Users need to understand the basis of AI and view recommendation engines with caution. But such awareness should not fall solely on users.

In the past year, companies have become increasingly proactive: Both Facebook and YouTube announced they would start to detect and demote harmful content.

But if we want to avoid a future filled with divisiveness and disinformation, there’s much more work to be done. Users need to understand which AI algorithms are working for them, and which are working against them.”

Using Social Media, Not Letting It Use You!

Have you ever thought about how you use social media? Do you have a daily habit of checking Facebook, or Twitter, or Instagram? Is this necessarily bad in and of itself? It doesn’t have to be, as long as it doesn’t become an addiction! You know that you are addicted if you have a “gnawing feeling” that you are missing out if you don’t check your social media accounts several times every day! Have there ever been several days in which you have not checked your social media? Maybe you got too busy, you had things going on in your life, or perhaps you just didn’t feel like you had the time to sit down and check your social media accounts. If so, you may be the exception and not the rule! It is surprising how many people can’t let a day go by without checking their social media!

This can be a form of addiction. Now I’m not necessarily saying that you will go into some kind of sweating, shaking withdrawal if you don’t check your social media accounts. But just that little “gnawing” in the back of your mind may be an indication that addiction is a possibility! The bottom line is, I want to use social media, but not let it use me!

Another problem to consider: if you get all your information, or even a large portion of it, from social media, are you thinking about it critically? I have mentioned in the last several articles the need for critical thinking. It is not enough to know that we need to think about what we see, what we read, and what we hear through media. We need to always stop and ask ourselves several questions. The key questions are: “Who wrote what I am reading?” Not who sent it. Not who posted it. Because that could have been a close friend, someone you know and trust. But we need to ask, “Where did they get it?” Are they just mindlessly forwarding something they saw that caught their attention briefly, with no idea as to the source of the information?

As an example, let’s say a close, trusted friend posts an article indicating that a celebrity has died. That celebrity is one of your favorite actors. You then post on your timeline how much you regret that actor’s death. Or, you simply share the article that your friend posted about the actor’s death. Then you find out that the actor in question is still alive! You feel kind of silly. That’s a fairly harmless example of the problem we’re talking about. But what if what you read in various posts, even from trusted friends, concerns an issue that is more important, or more critical? You could pass along information whose origin you do not know, and whose contents haven’t been confirmed. That piece of information could be read by someone it influences to think a certain way, to perceive the world a certain way, and it may even drive them to some form of action that you never thought of, or intended! Finally, ask yourself, “Is someone trying to influence me with this information?” Don’t be a “lemming” that jumps off into the sea because all the other lemmings are jumping! Think for yourself!

You can see the troubling process in this scenario. Blindly posting, or sharing, information can create situations that have dire consequences! This is a sad fact, but one that we need to take to heart. If you see something online, no matter the source, check out the facts for yourself. Don’t let someone influence you with a random post; check out the source, check out the motive, and find out the facts for yourself!

The Case for DuckDuckGo!

Many people who are concerned about their online security and privacy while doing Internet searches are switching to an alternative search engine called “DuckDuckGo,” located at duckduckgo.com. On their promotion page, they ask that users of the service spread information about why their friends should use the DuckDuckGo search engine. They say that “Friends don’t let friends get tracked!” They also remind users that they need to tell their friends that Google tracks you, and DuckDuckGo does not. Search should remain private and should not be targeted by advertisers. DuckDuckGo actively blocks Google’s hidden trackers, and Google trackers lurk on 75% of the top million websites! They indicate that their unbiased results are outside of the “filter bubble” that is typified by Google.

DuckDuckGo is committed to unbiased search that’s never based on your search history, and they ask that you spread the word that we all should stand up for a pro-privacy business model in the field of Internet search. This would be a distinct alternative to Google’s “collect-it-all” business model! The gist of it is that no one else should own your data! It is your data and you need to protect it. This is the thinking behind DuckDuckGo. It is a privately held Internet company dedicated to empowering the user to take control over their personal information online without trade-offs.

Maybe it is time to consider this alternative to the all-powerful Google!

The Issue With Google’s Control of Search

As you know, my buddy that I used to work with, whom I call “The Other Computer Curmudgeon,” has been sending me all kinds of information about how Google is trying to take over the world! Now, of course, some people think he is a little crazy, but they think that about me as well, so it works out!

To that end, Politico magazine online had a story in 2015 entitled “How Google Could Rig the 2016 Election.” Now, of course, this is old news, as we are well past 2016, but I think that what it describes could in fact be used to sway future elections as well. The author, Robert Epstein, writes in this article that he had been directing research into Google and its ability to control opinions and beliefs based on its search algorithms. What you search for, and what the responses are, is of great importance in how you proceed to perceive an issue.

Google has the ability, perhaps more than any other company in history, to control, or shift, the voting preferences of undecided voters. Mr. Epstein indicates that, in his view, those voters would have virtually no knowledge that they’re being manipulated by the search results they see.

He then goes on to point out that because many elections are won by very small margins, this would give Google the power to flip upwards of 25% of national elections in countries worldwide. His example is that, in the United States, half of our past presidential elections have been won by margins under 7.6%.

The fact that this could be done without the knowledge of the people doing searches using the Google search engine is, to me, what is most insidious in this scenario. Because our school systems have done a very poor job of teaching people to think critically for themselves, most people tend to just surf the web, do searches in the search engines, and assume that what they’re finding online IS accurate information. They don’t question the source, or the motivation, of the people posting such information on the Internet.

Because of this, there is a real danger that people will be swayed by what they find on the Google search engine, Facebook, Twitter, and even Instagram and Snapchat, and influenced without even knowing that they have been influenced. There’s something about seeing an article on a computer screen that tends to make people think it must be true! This is a sad fact, but all you have to do is look critically at a pharmaceutical advertisement on TV, at what they are saying as well as what they are NOT saying, to get an understanding of, or context for, the drug they’re trying to sell you. Because make NO mistake, the reason they’re running that advertisement IS to sell you their latest drug! That’s just one example of the kind of influence that I’m talking about.

This is even more critical in a venue where people assume that the people providing a search engine have no “skin in the game” when it comes to providing a search response. That is, in fact, the way it should be, but unfortunately it is not the way it is! Google does have a political agenda. That’s why, as my buddy “The Other Computer Curmudgeon” says, we need to be very careful of Google, what it represents in our searches, and how it impacts the direction of our thinking. The key here is to be a critical thinker! Think about what the person is saying, how they’re saying it, what their motivation is in saying it, and how they are trying to influence you through what they’re writing, or presenting to you. If you don’t see opposing views, you could be swayed!

Frustrating Voicemail Scam

I just posted this on Facebook:

“As most of you know, I run a tech show on YouTube called Dr. Bill.TV | The Computer Curmudgeon. I’m going to be talking about this on my show, but I wanted to put this on Facebook as well because it is SO frustrating! I have been hit twice today with a new phone scam that I have just been made aware of. Perhaps you’ve already had this happen, and if you haven’t, it probably will soon! It begins on your cell phone with a voicemail message. The key here is that the phone NEVER rings. You are just notified that you have a voicemail waiting. When you listen to the voicemail you are told some story, and the two I heard today were both different; then you are asked to call a telephone number, at which point they probably scam you to no end! I, of course, did not call the numbers that I was given and told to call. The correct response is simply to delete the voicemail, and go on about your business.

What is so frustrating here is that no call-blocking software blocks this yet. I have several apps on my phone, and none of them stopped it. I understand that there is legislation being considered to stop this practice.

This is a case of tech development gone bad. They are developing nuisance tech to try to reach us with their scams, and this one is particularly insidious! More to come on my show about tech gone bad this weekend!”

Elon Musk Wants to Wire-Up Your Brain!

This would give whole new meaning to the Blue Screen of Death! Yikes! And would you want someone to be able to hack your brain?!

Elon Musk is making implants to link the brain with a smartphone

CNN – By: Michael Scaturro – “London (CNN Business) Elon Musk wants to insert Bluetooth-enabled implants into your brain, claiming the devices could enable telepathy and repair motor function in people with injuries.

Speaking on Tuesday, the CEO of Tesla (TSLA) and SpaceX said his Neuralink devices will consist of a tiny chip connected to 1,000 wires measuring one-tenth the width of a human hair.

The chip features a USB-C port, the same adapter used by Apple’s (AAPL) MacBooks, and connects via Bluetooth to a small computer worn over the ear and to a smartphone, Musk said.

‘If you’re going to stick something in a brain, you want it not to be large,’ Musk said, playing up the device’s diminutive size.

Neuralink, a startup founded by Musk, says the devices can be used by those seeking a memory boost or by stroke victims, cancer patients, quadriplegics or others with congenital defects.

The company says up to 10 units can be placed in a patient’s brain. The chips will connect to an iPhone app that the user can control.

The devices will be installed by a robot built by the startup. Musk said the robot, when operated by a surgeon, will drill 2 millimeter holes in a person’s skull. The chip part of the device will plug the hole in the patient’s skull.

‘The interface to the chip is wireless, so you have no wires poking out of your head. That’s very important,’ Musk added.

Trials could start before the end of 2020, Musk said, likening the procedure to Lasik eye correction surgery, which requires local anesthetic.

Musk has said this latest project is an attempt to use artificial intelligence (AI) to have a positive effect on humanity. He has previously tried to draw attention to AI’s potential to harm humans.

He has invested some $100 million in San Francisco-based Neuralink, according to the New York Times.

Musk’s plan to develop human computer implants comes on the heels of similar efforts by Google (GOOGL) and Facebook (FB). But critics aren’t so sure customers should trust tech companies with data ported directly from the brain.

‘The idea of entrusting big enterprise with our brain data should create a certain level of discomfort for society,’ said Daniel Newman, principal analyst at Futurum Research and co-author of the book Human/Machine.

‘There is no evidence that we should trust or be comfortable with moving in this direction,’ he added.

While the technology could help those with some type of brain injury or trauma, ‘Gathering data from raw brain activity could put people in great risk, and could be used to influence, manipulate and exploit them,’ Frederike Kaltheuner of Privacy International told CNN Business. ‘Who has access to this data? Is this data shared with third parties? People need to be in full control over their data.’

The tech industry is coming under heightened scrutiny over how it handles data.

France fined Google parent company Alphabet in January for violating EU online privacy rules. Facebook reportedly faces a major fine in the United States over its own data privacy violations.

Tesla has also suffered data leaks. In 2018, researchers at security firm RedLock said Tesla’s cloud storage was breached to mine cryptocurrency.”

Linux Drops the Floppy Disk!

You knew this had to happen eventually. Who uses a floppy anymore!?

Retrotechtacular: The Floppy Disk Orphaned By Linux

HackaDay – By: Jenny List – “About a week ago, Linus Torvalds made a software commit which has an air about it of the end of an era. The code in question contains a few patches to the driver for native floppy disc controllers. What makes it worthy of note is that he remarks that the floppy driver is now orphaned. Its maintainer no longer has working floppy hardware upon which to test the software, and Linus remarks that ‘I think the driver can be considered pretty much dead from an actual hardware standpoint’, though he does point out that active support remains for USB floppy drives.

It’s a very reasonable view to have arrived at because outside the realm of retrocomputing the physical rather than virtual floppy disk has all but disappeared. It’s well over a decade since they ceased to be fitted to desktop and laptop computers, and where once they were a staple of any office they now exist only in the ‘save’ icon on your wordprocessor. The floppy is dead, and has been for a long time.

Still, Linus’ quiet announcement comes as a minor jolt to anyone of A Certain Age for whom the floppy disk and the computer were once inseparable. When your digital life resided not in your phone or on the cloud but in a plastic box of floppies, those disks meant something. There was a social impact to the floppy as well as a technological one, they were a physical token that could contain your treasured ephemeral possessions, a modern-day keepsake locket for the digital age. We may have stopped using them over a decade ago, but somehow they are still a part of our computing DNA.

So while for some of you the Retrotechtacular series is about rare and unusual technology from years past, it’s time to take a look at something ubiquitous that we all think we know. Where did the floppy disk come from, where is it still with us, and aside from that save icon what legacies has it bestowed upon us?


Computers of the 1950s and 1960s had typically been room-sized machines, and even though by the end of the ’60s a typical minicomputer had shrunk to the size of a cabinet it would still have retained some of the attributes of its larger brethren. Removable storage media were paper tapes and cards, or bulky magnetic disk packs and reels of tape.

The impending arrival of the desktop computer at the dawn of the 1970s demanded not only a higher capacity but also more convenience in the storage media for these new machines. It was IBM who would provide the necessary technology in the form of an 8-inch disk that they had developed for loading microcode onto their System/370 mainframes. Their patent for a single-sided disc with a capacity of 80kB had been filed in December 1969, and was granted in June 1972. 8-inch disk drives were produced by IBM and other manufacturers in a variety of formats with increasing capacities over the 1970s, and became a common sight attached to both minicomputers and desktop machines in that decade. Many consumers would have had their first glimpse of a floppy disk in this period courtesy of an 8-inch drive on a CP/M machine in their workplace, and they became for a while symbolic of a high-tech future.

The basic design of a flexible magnetic disk in a plastic wallet with a fabric liner was soon miniaturised, with the company formed by former IBM staffer Alan Shugart producing the 5.25″ format in 1976. This was visibly a shrunken 8″ disk, but its increased portability and convenience led to its rapid adoption. When IBM’s PC made its debut in 1981 it was the obvious choice, achieving mass-market ubiquity until it was slowly displaced by Sony’s 1981 launch of the 3.5″ hard-cased format.


This Disgo-branded 32MB Flash drive cost me a small fortune back in about 2001, but meant I could carry a load of floppies-worth of data in a much more convenient form.
It is an inevitability that any dominant technology will in due course be usurped, but why did the floppy fade away so quickly over the end of the 1990s? Was it the thirst for extra capacity that couldn’t be satisfied by expanded density drives or by expensive new formats such as Iomega’s Zip drive? Or was it simply superseded by a better technology such as the CD-ROM or the USB Flash drive? It’s more likely that both of these and more contributed to the format’s decline in popularity.

There was a time when a boot floppy was an essential tool in the armory of anybody working with computers, but as the CD and USB drive took over that function we said good riddance and no longer had to pray our boot floppies hadn’t lost a sector. The arrival of much more convenient free cloud services with significant storage — the launch of Gmail in 2004 comes to mind — sounded the death-knell for the floppy. If you bought a computer with a floppy drive installed after about 2005 you were in a minority, and in 2019 they retain a tenuous existence as an external peripheral with a USB interface. Perhaps most tellingly, an Amazon search reveals boxes of ten floppies selling for around $15; what was once a commodity item has crossed into being an expensive oddity.

The floppy drive has left us, but what legacies do we retain from it? Perhaps the most obvious is in every desktop computer, the size of the floppy drive standardized the size of the drive bay, which in turn dictated the size of other devices designed to be put into drive bays. And of course we’ll always have the glamorization of the floppy in movies from the era, like the corny-is-cool scene with a 3.5″ in 1999’s Office Space or the use of an 8″ in 1983’s War Games.

We’ll leave you with a video, showing an automated production line for 3.5 inch floppy disks. We see all the constituent parts including tiny pieces such as the write-protect slider and the head shutter spring, coming together on a beautiful piece of production line automation. A surprise is that the shell is assembled before the disk itself is slipped in from one end. If you still use floppies for something other than retrocomputing, we’d love to hear from you in the comments.”

UEFI Secure Boot Added to VirtualBox

VirtualBox on Linux has a new feature!

VirtualBox 6.0.10 Adds UEFI Secure Boot Driver Signing Support on Ubuntu and Debian

VirtualBox 6.0.10 comes more than two months after the previous maintenance release with some notable changes for Linux-based operating systems, especially Ubuntu and Debian GNU/Linux hosts, which received support for UEFI Secure Boot driver signing. Additionally, Linux hosts got better support for various kernels on Debian GNU/Linux and Fedora systems. It also fixes focus grabbing issues reported by users when building VirtualBox from sources using recent versions of the Qt application framework. The Linux guests support was improved as well in this release with fixes for udev rules for guest kernel modules, which now take effect in time, and the ability to remember the guest screen size after a guest reboot.

Humans Listen to Google Assistant, Too!

My buddy, whom I call the “Other Computer Curmudgeon,” warned us… there ARE humans listening as well!

Yep, human workers are listening to recordings from Google Assistant, too

The Verge – By: James Vincent – “A report from Belgian public broadcaster VRT NWS has revealed how contractors paid to transcribe audio clips collected by Google’s AI assistant can end up listening to sensitive information about users, including names, addresses, and details about their personal lives.

It’s the latest story showing how our interactions with AI assistants are not as private as we may like to believe. Earlier this year, a report from Bloomberg revealed similar details about Amazon’s Alexa, explaining how audio clips recorded by Echo devices are sent without users’ knowledge to human contractors, who transcribe what’s being said in order to improve the company’s AI systems.

Worse, these audio clips are often recorded entirely by accident. Usually, AI assistants like Alexa and Google Assistant only start recording audio when they hear their wake word (e.g., ‘Okay Google’), but these reports show the devices often start recording by mistake.

In the story by VRT NWS, which focuses on Dutch- and Flemish-speaking Google Assistant users, the broadcaster reviewed a thousand or so recordings, 153 of which had been captured accidentally. A contractor told the publication that he transcribes around 1,000 audio clips from Google Assistant every week. In one of the clips he reviewed he heard a female voice in distress and said he felt that ‘physical violence’ had been involved. ‘And then it becomes real people you’re listening to, not just voices,’ said the contractor.

Tech companies say that sending audio clips to humans to be transcribed is an essential process for improving their speech recognition technology. They also stress that only a small percentage of recordings are shared in this way. A spokesperson for Google told Wired that just 0.2 percent of all recordings are transcribed by humans, and that these audio clips are never presented with identifying information about the user.

However, that doesn’t stop individuals from revealing sensitive information in the recordings themselves. And companies are certainly not upfront about this transcription process. The privacy policy page for Google Home, for example, does not mention the company’s use of human contractors, or the possibility that Home might mistakenly record users.

These obfuscations could cause legal trouble for the company, says Michael Veale, a technology privacy researcher at the Alan Turing Institute in London. He told Wired that this level of disclosure might not meet the standards set by the EU’s GDPR regulations. ‘You have to be very specific on what you’re implementing and how,’ said Veale. ‘I think Google hasn’t done that because it would look creepy.’

In a blog post published later in the day, Google defended its practice of using human employees to review Assistant audio conversations. The company says it applies ‘a wide range of safeguards to protect user privacy throughout the entire review process,’ and it does this review work to improve the Assistant’s natural language processing and its support for multiple languages. But Google also owned up to the failure of those safeguards in the case of the Belgian contract worker who provided the audio to VRT NWS, breaking the company’s data security and privacy rules in the process.

‘We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,’ writes David Monsees, a product manager on the Google Search team who authored the blog post. ‘Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.’

Update 7/11, 6:33PM ET: Added information and comment from Google’s blog post published in response to the VRT NWS report.”
