Google’s China search engine project ‘effectively ended’: report


Google has been forced to shut down and "effectively end" its controversial China search engine project, code-named Project Dragonfly, after members of the company's privacy team raised complaints, according to a new report.

The tech giant led by CEO Sundar Pichai was forced to close a data analysis system it was using for the controversial project, according to The Intercept, which cited two sources familiar with the matter. The outlet was the first to report that Google had been considering launching the app-based search engine.

GOOGLE HAS NO PLANS TO LAUNCH SEARCH IN CHINA, PICHAI SAYS

When asked for comment, a Google spokesman pointed Fox News to comments Pichai made to Rep. Tom Marino last week, in which the CEO said:

“Right now there are no plans for us to launch a search product in China. We are, in general, always looking to see how best it's part of our core mission and our principles to try hard to provide users with information. We have evidence, based on every country we've operated in, us reaching out and giving users more information has a very positive impact and we feel that calling. But right now there are no plans to launch in China. To the extent we approach a position like that, I will be fully transparent, including with policy makers here, and engage and consult widely.”

Employees of the Mountain View, Calif.-based Google were using Beijing-based website 265.com, which the company bought in 2008 from Chinese billionaire Cai Wensheng, as a sort of market research to see what search queries were being entered. Eventually, the search queries were transferred to Baidu, the leading search engine in China. Google famously pulled out of China in 2010 after it said it would not provide censored search services in the country.

According to the report, engineers who worked on Project Dragonfly were using the data to review the list of websites Chinese users would see if they entered the same word or phrase into Google. They then checked whether sites in those results were blocked by China's Great Firewall and compiled a list of banned sites, including Wikipedia, British broadcaster BBC and others. After getting word of this, the company's privacy staff became "really p-ssed," according to one source in The Intercept's story, and the engineers working on Project Dragonfly were told they could no longer use the data.

“The 265 data was integral to Dragonfly,” said one source. “Access to the data has been suspended now, which has stopped progress.”

REPORT REVEALS GOOGLE IS TRACKING YOU WHETHER YOU LIKE IT OR NOT

Google CEO Sundar Pichai appears before the House Judiciary Committee to be questioned about the internet giant’s privacy, security and data collection, on Capitol Hill in Washington, Tuesday, Dec. 11, 2018. (AP)

Following the decree from the privacy team, Google employees working on the app-based search engine have used different datasets, including "global Chinese" queries entered into Google by people living outside the world's most populous country, including users in the U.S. and Malaysia. That has made it significantly harder to gauge the accuracy of the results, and some team members have left the project, the report added.

A source familiar with the matter told Fox News that as the company has worked on the project and brought in privacy engineers, any final launch would be "contingent on a full, final privacy review," but the company has not yet gotten to that point. The source added that Google's goal of serving its Chinese users has not "diminished" and that the company's mission is to "create access for all the world’s information to as many users as possible."

The Dragonfly effort led to the resignation of several Google employees and last month prompted more than 700 to sign a letter to Pichai calling for the project to be halted.

Speaking in front of the House Judiciary Committee amid allegations of anti-conservative bias and privacy violations last week, Pichai said the company's efforts were only an exploration of what a search engine could look like in a country like China.

"Right now, we have no plans to launch [a search product] in China," Pichai said in response to a question from a lawmaker, adding that "getting access to information is an important human right."

Follow Chris Ciaccia on Twitter @Chris_Ciaccia

This story has been updated with a response from a Google spokesperson.

Shocking scale of Russia’s sinister social media campaign against US revealed

Russia's influence campaign during the 2016 presidential election was a sophisticated and multifaceted effort to target the African-American community and sow political division among the public across social media platforms, according to new reports produced for the Senate Intelligence Committee.

One report, which is 100 pages long, provides new context and details regarding the large scope of the multi-year Russian operation and the nefarious tactics it employed to exploit divisions along race and political ideology in the U.S. on a range of social media platforms, including Google-owned YouTube, Facebook, Twitter and Facebook-owned Instagram. The shadowy effort aimed to support the Trump campaign, denigrate Hillary Clinton, suppress the vote, sow discord and attack various public figures.

According to the report released on Monday, the massive operation reached 126 million people on Facebook, posted 10.4 million tweets on Twitter, uploaded over 1,000 videos to YouTube, and reached over 20 million users on Instagram. The report states that roughly 6 percent of tweets, 18 percent of Instagram posts and 7 percent of Facebook posts mentioned Trump or Clinton by name. However, Trump was mentioned roughly twice as often as Clinton on most platforms. The report, titled "The Tactics and Tropes of the Internet Research Agency," warns that the manipulation of U.S. political discourse continues in 2018.

The report, produced by cybersecurity firm New Knowledge, Canfield Research and Columbia University's Tow Center for Digital Journalism, reveals that the Internet Research Agency (IRA), a Russian company owned by a businessman who is reportedly a close ally of Russian President Vladimir Putin, hit on a range of themes and social issues over and over again across multiple online platforms, including Muslim culture, black culture, gun rights, LGBT issues, patriotism, Tea Party issues, veterans' rights, pro-Bernie Sanders and Jill Stein content, Christian culture, Southern culture and American separatist movements.

GOOGLE CEO ON CAPITOL HILL: SOME OF THE WEIRDEST EXCHANGES

However, the report says the most prolific, intense efforts centered on targeting black Americans and appear to have focused on developing audiences in that community and recruiting black Americans as "assets".

"The IRA created an expansive cross-platform media mirage targeting the Black community, which shared and cross-promoted authentic Black media to create an immersive influence ecosystem," the report states. "The IRA exploited the trust of their Page audiences to develop human assets, at least some of whom were not aware of the role they played. This tactic was substantially more pronounced on Black-targeted accounts."

The report also reveals the shocking scale of the disinformation campaign on Instagram, which is owned by Facebook. “Instagram was a significant front in the IRA’s influence operation, something that Facebook executives appear to have avoided mentioning in Congressional testimony,” it says. “There were 187 million engagements on Instagram. Facebook estimated that this was across 20 million affected users. There were 76.5 million engagements on Facebook; Facebook estimated that the Facebook operation reached 126 million people.”

Russian President Vladimir Putin arrives to chair a meeting to discuss preparations to mark the anniversary of the allied victory in World War II, in the Kremlin in Moscow, Russia, Wednesday, Dec. 12, 2018. (AP)

FACEBOOK'S FALL: FROM THE FRIENDLIEST FACE OF TECH TO PERCEIVED ENEMY OF DEMOCRACY

Researchers also note that in 2017, as the media covered its Facebook and Twitter operations, the IRA shifted much of its activity to Instagram. “Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare. Alternately, it is possible that the IRA’s Instagram engagement was the result of click farms; a few of the provided accounts reference what appears to be a live engagement farm.”

Set against this backdrop, the study warns that Instagram is likely to be a key battleground in the future.

The themes selected by the IRA were "deployed to create and reinforce tribalism within each targeted community," according to the report, which notes that a majority of posts created by a given Facebook page reinforced in-group camaraderie. Partisan content was also presented to targeted groups in on-brand ways: for example, one meme featured Jesus in a Trump campaign hat on an account targeting Christians.

Additionally, the report notes that the IRA co-opted the names of real groups with existing reputations among the targeted communities, including "United Muslims of America," "Cop Block," "Black Guns Matter" and "L for Life." Researchers said this was possibly an attempt to loosely backstop an identity if a curious individual did a Google search, or to piggyback on an established brand.

TWITTER'S RELEASE OF RUSSIAN, IRANIAN INFLUENCE CAMPAIGN TWEETS SHOWS US VULNERABILITY 

The influence campaign began on certain platforms several years ago. The IRA was active on Twitter as early as 2014, prior to their efforts on Facebook and Instagram. However, since the Senate Select Committee on Intelligence only requested data from January 1, 2015, it's possible that some IRA content that appeared on Facebook or Instagram was simply not included in the data provided. The IRA also produced videos across 17 channels on YouTube beginning in September 2015, with most content related to police brutality and the Black Lives Matter movement.

Meanwhile, a second report produced for the Senate Committee also paints a worrying picture of Russia’s influence campaign.

The study by Oxford University’s Computational Propaganda Project and social media analysis specialist Graphika notes the scale of the social media onslaught. “Between 2013 and 2018, the IRA’s Facebook, Instagram, and Twitter campaigns reached tens of millions of users in the United States,” it says. “IRA activities focused on the U.S. began on Twitter in 2013 but quickly evolved into a multi-platform strategy involving Facebook, Instagram, and YouTube amongst other platforms.”

Russia’s attempts to sow discord in society have continued long after the 2016 U.S. presidential election, according to the researchers. “IRA posts on Instagram and Facebook increased substantially after the election, with Instagram seeing the greatest increase in IRA activity.”


FACEBOOK'S TIPPING POINT: TECH GIANT GRAPPLES WITH SLOWING GROWTH, CALLS FOR LEADERSHIP SHAKEUP

A Facebook spokesperson provided Fox News with the following statement:

“Congress and the intelligence community are best placed to use the information we and others provide to determine the political motivations of actors like the Internet Research Agency. We continue to fully cooperate with officials investigating the IRA's activity on Facebook and Instagram around the 2016 election. We've provided thousands of ads and pieces of content to the Senate Select Committee on Intelligence for review and shared information with the public about what we found. Since then, we've made progress in helping prevent interference on our platforms during elections, strengthened our policies against voter suppression ahead of the 2018 midterms, and funded independent research on the impact of social media on democracy.”

A spokesperson from Twitter released the following statement to Fox News:

"Our singular focus is to improve the health of the public conversation on our platform, and protecting the integrity of elections is an important aspect of that mission. We’ve made significant strides since 2016 to counter manipulation of our service, including our release of additional data in October related to previously disclosed activities to enable further independent academic research and investigation.”

A Google spokesperson declined to comment on the reports, although the company has previously described preventing misuse of its platform as a major focus.

"This newly released data demonstrates how aggressively Russia sought to divide Americans by race, religion and ideology, and how the IRA actively worked to erode trust in our democratic institutions," Senate Select Committee on Intelligence Chairman Richard Burr, R-NC, said in a statement. "Most troublingly, it shows that these activities have not stopped."

The committee's vice chairman, Sen. Mark Warner, D-Va., released a statement that read in part:

"These attacks against our country were much more comprehensive, calculating and widespread than previously revealed. That is going to require some much-needed and long-overdue guardrails when it comes to social media.  I hope these reports will spur legislative action in the Congress and provide additional clarity to the American public about Russia’s assault on our democracy.”

The report by New Knowledge, Canfield Research and Columbia University's Tow Center for Digital Journalism also notes that the efforts by social media platforms to crack down on bots may not be enough.

"Now that automation techniques (e.g. bots) are better policed, the near future will be a return to the past: we’ll see increased human-exploitation tradecraft and narrative laundering," the report states in its conclusion. "We should certainly expect to see recruitment, manipulation, and influence attempts targeting the 2020 election, including the inauthentic amplification of otherwise legitimate American narratives, as well as a focus on smaller/secondary platforms and peer-to-peer messaging services."

Russia has repeatedly denied meddling in the 2016 U.S. presidential election.

Christopher Carbone covers technology and science for Fox News Digital. Tips or story leads: christopher.carbone@foxnews.com. Follow @christocarbone.

Google CEO Sundar Pichai says AI fears are ‘very legitimate’

Google CEO Sundar Pichai said last week that concerns about harmful applications of artificial intelligence are "very legitimate."

In a Washington Post interview, Pichai said that AI tools will need ethical guardrails and will require companies to think deeply about how technology can be abused.

“I think tech has to realize it just can’t build it and then fix it,” Pichai, fresh from his testimony before House lawmakers, said.  “I think that doesn’t work.”

Tech giants have to ensure artificial intelligence with “agency of its own” doesn't harm humankind, Pichai noted.

HERE'S HOW TO BLOCK ROBOCALLS ON IPHONE AND ANDROID

The tech executive, who runs a company that uses AI in many of its products, including its powerful search engine, said he is optimistic about the technology's long-term benefits, but his assessment of AI's potential downsides parallels that of critics who have warned about the potential for misuse and abuse.

Advocates and technologists have been warning about the power of AI to embolden authoritarian regimes, empower mass surveillance and spread misinformation, among other possibilities.

SpaceX and Tesla founder Elon Musk once said that AI could prove to be “far more dangerous than nukes.”

Google's work on Project Maven, a military AI program, sparked a protest from its employees and led the tech giant to announce that it won't continue the work when the contract expires in 2019.

10 IPHONE TRICKS YOU'LL WISH YOU KNEW SOONER

Pichai said in the interview that governments worldwide are still trying to grasp AI’s effects and the potential need for regulation.

“Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long-term, and I think the questions are actually pretty complex,” he told the Post. Other tech companies, such as Microsoft, have embraced regulation of AI — both by the companies that create the technology and the governments that oversee its use.

Google CEO Sundar Pichai appears before the House Judiciary Committee to be questioned about the internet giant’s privacy, security and data collection, on Capitol Hill in Washington, Tuesday, Dec. 11, 2018. (AP)

But AI, if handled properly, could have “tremendous benefits,” Pichai explained, including helping doctors detect eye disease and other ailments through automated scans of health data.

“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he told the newspaper. “This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

Pichai, who joined Google in 2004 and became chief executive 11 years later, in January called AI “one of the most important things that humanity is working on” and said it could prove to be “more profound” for human society than “electricity or fire.”

However, the race to build machines that can operate on their own has rekindled fears that Silicon Valley’s culture of disruption could result in technology that harms people and eliminates jobs.

Christopher Carbone covers technology and science for Fox News Digital. Tips or story leads: christopher.carbone@foxnews.com. Follow @christocarbone.

Google announces $1B New York campus

Last week, Apple announced plans to invest $1 billion building a new Austin campus that will be home to 5,000 employees initially. This week, Google announced a $1 billion investment to build a new New York City campus called Google Hudson Square.

The new campus has been made possible by signing lease agreements for 315 and 345 Hudson Street as well as a signed letter of intent for 550 Washington Street. In total, this will offer Google an additional 1.7 million square feet of space to house employees. Google will move into the two Hudson buildings beginning in 2020, and if 550 Washington Street is completed then Googlers will move in starting in 2022.

Google already has a big presence in New York with over 7,000 employees located there working on its Search, Ads, Maps, YouTube, Cloud, Technical Infrastructure, Sales, Partnerships, and Research teams. But it's the search giant's intention to double the size of its NYC workforce within 10 years, and for that the company requires even more space.

The last major investment in the city happened back in February when Google acquired the entire Chelsea Market building for $2.4 billion. Before that, Google acquired 111 Eighth Avenue for $1.9 billion and gained control of 2.9 million square feet of usable space.

Google is keen to point out that it isn't just jobs it brings to New York, but lots of investment, too. The company says it has invested over $150 million in grants and employee-matched contributions to nonprofit institutions since 2011. The company also helped bring over 5,000 free Wi-Fi hotspots to the New York Public Library System. This investment is set to continue, especially in the areas of "STEM education, workforce development, and access to technology."

This article originally appeared on PCMag.com.

Fox on Tech: Google Plus data breaches causing more headaches

Google is in hot water again with users and privacy advocates, in the same week the tech giant's CEO is defending the company to skeptical lawmakers on Capitol Hill.

If you've never heard of Google Plus, you're not alone. The failed social media network had been scheduled to close down in August after failing to make a dent in Facebook's dominance. But now it's going to be shuttered even earlier than planned, due to a security bug that led to a massive leak of some 52 million users' private data. The breach allowed outside developers to mine private information about users, including name, email and age, even if an account was set to private, which is more than enough for an experienced thief to use for identity theft. The good news, according to Google: no financial information was released, and so far the company hasn't seen any evidence that the information was used illegally. Of course, it will be keeping an eye on that as it winds down Google Plus operations.

This is actually Google Plus's second major data breach. Back in October, the company said some 500,000 users' data was compromised, which led to the announcement at that time that Google Plus would be going offline in August. However, because of this new breach, the termination date is now being moved up to April.

The revelation came on Monday, just a day before CEO Sundar Pichai headed to Capitol Hill to face lawmakers for the first time. Most of the hearing focused on allegations of discrimination against conservatives online, but Pichai also spent a significant amount of time defending the company's record on privacy and data protection, telling members of the House Judiciary Committee that "protecting the privacy and security of our users has long been an essential part of our mission. We have invested an enormous amount of work over the years to bring choice, transparency and control to our users." It remains to be seen if Pichai's reassurances are enough to win over a skeptical public, wary of constant data breaches.

Google hits pause on selling facial recognition tech over abuse fears

The ethical dilemma swirling around facial recognition technology has prompted Google to hit pause on selling its own system to the public.

On Thursday, Google's Cloud business said it was holding off on offering a general-purpose facial recognition system, citing the potential for abuse.

"We continue to work with many organizations to identify and address these challenges, and unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions," company Vice President of Global Affairs Kent Walker wrote in a Thursday blog post.

Walker's statement was likely a subtle jab at Amazon, which has been offering a facial recognition system to customers, including US law enforcement. Amazon's system, dubbed Rekognition, can identify people's faces in digital images and videos, making it useful for police to quickly look up suspects in criminal investigations. However, civil liberties groups fear the same technology can be abused to power mass surveillance over security cameras to track everyday citizens.

Walker actually devoted most of the post to the benefits of facial recognition and AI algorithms that can decipher objects in images. For example, Google recently developed an AI model to help eye doctors quickly identify whether their diabetic patients suffered from a complication that can cause permanent blindness if left untreated.

    "Our AI model now detects diabetic retinopathy with a level of accuracy on par with human retinal specialists," Walker wrote. "This means doctors and staff can use this assistive technology to screen more patients in less time, sparing people from blindness through a more timely diagnosis."

    Google wants to bring the benefits of AI-driven technology to everyone, so it plans to continue researching the technology and carry out certain projects in coordination with third-party researchers, non-profits, governments, and businesses. "However, like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes," Walker added.

    In June, Google adopted a set of new company principles on AI development that specifically ban the design and deployment of artificial intelligence as a weapon or surveillance tool. This came after Google employees protested the company's involvement in a Pentagon project to use AI to analyze drone footage.

    On Thursday, the American Civil Liberties Union praised Google's decision to refrain from offering a general-purpose facial recognition system. "This is a strong first step," ACLU director Nicole Ozer said in a statement. "Google today demonstrated that, unlike other companies doubling down on efforts to put dangerous face surveillance technology into the hands of law enforcement and ICE, it has a moral compass and is willing to take action to protect its customers and communities."

    Google does currently offer an object-recognition technology called Cloud Vision that can scan images and detect what they depict. But at this point, the system only offers "face detection." It does not support the ability to recognize a face and determine the person it belongs to.

    Meanwhile, rival Microsoft has also been selling a facial recognition system through its Azure platform. However, Microsoft has been outspoken in calling on governments to introduce laws that will regulate the technology before they can be abused on a wide scale.

    This article originally appeared on PCMag.com.

Google slammed by New Zealand lawmaker after naming suspect in the murder of British backpacker

Google has been slammed by a New Zealand lawmaker after the tech giant reportedly published the name of the suspect in the murder of British backpacker Grace Millane.

The 22-year-old British tourist was murdered earlier this month, according to police. She was staying at a backpacker hostel in Auckland when she went missing Dec. 1. Millane failed to contact her family on her birthday the following day, which alarmed them.

A week later, police found Millane's body in a forested area not far from the side of the road in the Waitakere Ranges near Auckland.

BODY BELIEVED TO BE BRITISH BACKPACKER FOUND IN NEW ZEALAND, SUSPECT CHARGED IN MURDER, POLICE SAY

A 26-year-old man has been charged with Millane’s murder but has not been named.

The BBC reports that the suspect in the case was granted a “temporary name suppression” while he awaits trial. However, the suspect was named in a mass email sent out earlier this week by Google, according to the New Zealand Herald. The email, which was viewed by the Herald, reportedly named the accused in its subject heading.

The email was sent out to people signed up to receive information on “what’s trending in New Zealand.”

New Zealand Justice Minister Andrew Little told the newspaper that publication of the suspect’s details in New Zealand is a breach of the court order. If the breach were linked back to Google infrastructure in New Zealand, the tech giant could be prosecuted, he said.

AMAZON EXECS GRILLED, JEERED AT NEW YORK CITY COUNCIL HEARING ON HQ2

A Google spokesperson told the Herald that the company's initial investigation showed it did not know about the suppression order, and that the search giant would comply with any court order it was made aware of.

The spokesperson said that Google Trends alerts are generated automatically by algorithms based on searches in specific geographies over a certain period of time.

Police have declined to comment on reports that Millane met the man charged with her murder on Tinder.

The Associated Press contributed to this article.

Follow James Rogers on Twitter @jamesjrogers

Google CEO on Capitol Hill: Here are some of the weirdest exchanges

Google CEO Sundar Pichai's Tuesday appearance on Capitol Hill featured a mix of strange theater and baffling moments as the engineer-turned-chief executive parried questions from lawmakers.

Although some questions were about privacy and hate speech, the event was dominated primarily by accusations of political bias. At various moments, Pichai fielded questions from House Judiciary Committee lawmakers that either showed their own Luddite sensibilities or betrayed a misunderstanding of how Google's powerful search engine actually works.

Rep. Steve King (R.-IA) demanded to see the social media profiles of the Google employees who work on search so that they could be probed for any "built-in bias" against conservatives.

GOOGLE REVEALS 2018'S TOP SEARCHES

“There is a very strong conviction on this side of the aisle that the algorithms are written with a bias against conservatives,” King said during the hearing.


Pichai explained throughout the hearing that Google's search system is driven by algorithms that are constantly improved by human raters who follow strict guidelines, and that search results cannot be manipulated by rogue employees.

In another odd exchange, King quizzed Pichai on why the congressman's 7-year-old granddaughter had seen a picture of her grandfather with derogatory language pop up on her iPhone while she was playing a game before the November election.

"Congressman, iPhone is made by a different company," Pichai responded, prompting laughter from some Democrats on the committee.

The tech CEO then added that he'd be willing to follow up when King claimed it might have been an Android phone.

Rep. Steve Chabot (R.-OH) complained about having to go to a third or fourth page of search results to find good things about GOP-proposed health care policy or the Republican tax cut bill. When Pichai responded by saying that Google's algorithm reflects what is being said objectively, without regard to partisanship, Chabot disagreed, seeming to imply that a Google employee is pulling strings "Wizard of Oz"-style to influence search.

"You've got somebody out there" changing search results, Chabot insisted.

Pichai said he'd be happy to follow up and explain more about how the process works.

When a user types a question into Google, the company's software matches the query with terms on the most relevant pages and ranks those pages based on authoritativeness and relevance before producing results, Pandu Nayak, Google’s head of ranking, told Fox News in a recent interview.

Meanwhile, its algorithms are constantly being improved based on input from about 10,000 search quality raters who conduct thousands of tightly controlled experiments in accordance with public guidelines.

RUSSIA THREATENS TO BAN GOOGLE IF IT DOESN'T BAN CERTAIN WEBSITES

Google CEO Sundar Pichai appears before the House Judiciary Committee to be questioned about the internet giant’s privacy, security and data collection, on Capitol Hill in Washington, Tuesday, Dec. 11, 2018. (AP Photo/J. Scott Applewhite)

Rep. Lamar Smith (R.-TX) began by claiming that Google is censoring conservative views, mentioning a "study" cited by President Trump purporting to show that 96 percent of searches for Trump come from "left-leaning sources." The "study" was rated false by Politifact.

Pichai — noting that top news results reflect a diversity of sources — responded by saying that several of Smith's citations were inaccurate and had flaws in their methodology. The exchange continued in that vein, with Smith asking what Pichai would do about "bias."

“Today we use some very robust methodology, and we have been doing [so] for 20 years. Making sure that results are accurate is what we need to do well and we work hard to do that,” the chief executive said.

Still, the line of questioning continued.

Rep. Louie Gohmert (R.-TX) was not happy that the Southern Poverty Law Center, which he said has "stirred up more hate than any other group," is a trusted flagger of content on Google-owned YouTube. The company's trusted flagger program was created to help nongovernmental groups and users flag videos that may violate YouTube's community guidelines; the tech company partners with a wide range of groups in the program.

Unrelenting, Gohmert said that Pichai is surrounded by "liberality" that "hates conservatives."

Since the Texas representative didn't ask any questions, Pichai did not respond, although later in the hearing he said, "I’ve communicated clearly that we need to welcome all perspectives at Google."

After being questioned by Democratic Rep. Jamie Raskin of Maryland about a Washington Post report showing that YouTube is still struggling with conspiracies and hate speech on its platform, Pichai said the company has a responsibility to do more in this area and noted that 400 hours of video are uploaded to YouTube every minute.

House Majority Leader Kevin McCarthy, R-Calif., left, talks with Rep. Jim Jordan, R-Ohio, before the House Judiciary Committee questions Google CEO Sundar Pichai about the internet giant’s privacy, security and data collection, on Capitol Hill in Washington, Tuesday, Dec. 11, 2018. (AP Photo/J. Scott Applewhite)

A brief exchange with Texas Rep. Ted Poe, also a Republican, encapsulated the hearing's weird vibe.

“I have an iPhone," Poe said, brandishing the device for all to see. "If I go and sit with my Democratic friends over there, does Google track my movement?"

When Pichai began to reply, explaining that the answer to Poe's question really depends on settings for location, apps, and privacy configurations, Poe cut him off. “It’s a 'yes' or 'no' question,” he yelled.

As a factual matter, it was not.

Lastly, Democratic Congressman Ted Lieu of California excoriated his Republican colleagues over their complaints regarding alleged bias in search results.

After doing a search in real-time for "Congressman Steve King," and noting that the first result was an ABC News article with a negative tone, Lieu asked Pichai if there were people at Google trying to modify search results for individuals in a political way.

Pichai reiterated that Google does not manipulate results for people in that way.

"So let me just conclude here by stating the obvious," Lieu responded. "If you want positive search results, do positive things. If you don't want negative search results, don't do negative things.

"And to some of my colleagues across the aisle, if you're getting bad press articles and bad search results, don't blame Google or Facebook or Twitter — consider blaming yourself," he added.

Christopher Carbone covers technology and science for Fox News Digital. Tips or story leads: christopher.carbone@foxnews.com. Follow @christocarbone.

Google is a tricky case but conservatives please stay strong — Reject the temptation to regulate the internet

Everyone involved in politics has bad days, when one’s interests conflict with one’s ideals. Some conservatives had a bad day on Tuesday when Google CEO Sundar Pichai appeared before Congress to respond to allegations of anti-conservative bias at Google.

Since at least the presidency of Ronald Reagan, conservatives have stood for limited, constitutional government. That commitment has not always been easy. Supreme Court Justice Antonin Scalia voted to protect flag burning as free speech even though he hated the desecration of the flag. If conservatives don’t stand strong – even in tough cases – for limited government, who will?

Content moderation at big tech companies certainly looks like a tough case. On the one hand, conservatives have long supported a free market where entrepreneurs and CEOs, not politicians, decide how to run businesses.

On the other hand, Mark Zuckerberg noted earlier this year that the people who work in Silicon Valley generally lean to the left. So do university employees, and conservatives are well aware of the problems posed by the left’s dominance on campuses.

So conservatives are tempted to use the tools of big government to make sure Google and Facebook don’t restrict speech that their employees do not like. We saw some conservatives giving in to temptation during the Pichai hearing.

Rep. Mike Johnson, R-La., said Congress should make sure Google’s search “is never used to unfairly censor conservative viewpoints or suppress political views.” I thought the Fairness Doctrine was done away with during the Reagan administration because that conservative president believed in free speech! The conservative ideal of the free market in searches and speech means Mr. Pichai is accountable to his customers – not to Congress.

Rep. Steve King, R.-Iowa, demanded that Congress have access to the social media history of content moderators at Google. He continued, “If that doesn’t solve this problem, the next step then is to publish the algorithms. If that doesn’t happen, then the next step down the line is Section 230.” (Section 230 of the Communications Decency Act provides liability protections which prevent social media firms from being held legally responsible for user-generated content.)

Let’s be clear here. Rep. King is saying the federal government should force private individuals to disclose their lives online to achieve “fairness.” If that fails, the federal government should take control of private property (the code for Google’s search function) and make it public, thereby destroying much of its value. Finally, if all else fails, Rep. King wants to end that part of current law (Section 230) which experts say has protected speech from suppression by big tech.

No doubt Reps. King and Johnson give voice to conservatives' fears. Having been excluded from the mainstream media and university campuses, conservatives now see a future of being forced off the online platforms where most political speech takes place.

But big government is not the answer. As early as 2021, liberals may control both houses of Congress and the presidency. Are they likely to use the federal government to make sure companies are fair to conservatives? Of course not. So why give the federal government such new power over private companies?

Conservative ideals can still protect conservative interests. At least half of America leans right, a market that Mr. Pichai and other Silicon Valley CEOs will not ignore, whatever their own political commitments. And if some Google employees decide that politics matter more than profits, it is Mr. Pichai’s responsibility, not Congress’, to set matters right on behalf of his shareholders.

Humans are often tempted to sacrifice their ideals for good reasons and bad. But a market free of government control was a worthy ideal long before Google arose. Indeed, that ideal made Silicon Valley possible.

Conservatives need to stay the course and reject the temptation of big government regulation of the internet – a temptation that in the end will serve neither their ideals nor their interests.

John Samples is a vice president at the Cato Institute.
