Tech Digest 101 http://feed.informer.com/digests/IDAP6RXKUY/feeder Tech Digest 101 Respective post owners and feed distributors Fri, 21 Oct 2016 08:26:35 +0200 Feed Informer http://feed.informer.com/ The WEiRDEST Way to Use a Mac https://www.youtube.com/watch?v=0vFErGxD2QY Dave Lee urn:uuid:5f60be74-3006-e53e-f18b-5b189c7886bd Fri, 15 Aug 2025 15:36:37 +0200 Is AI really trying to escape human control and blackmail people? https://arstechnica.com/information-technology/2025/08/is-ai-really-trying-to-escape-human-control-and-blackmail-people/ Ars Technica » Technology Lab urn:uuid:b98cdc25-e73b-b4dd-5d48-4de98d993f89 Wed, 13 Aug 2025 22:28:20 +0200 Opinion: Theatrical testing scenarios explain why AI models produce alarming outputs—and why we fall for it. <p>In June, <a href="https://fortune.com/2025/06/29/ai-lies-schemes-threats-stress-testing-claude-openai-chatgpt/">headlines</a> read like science fiction: AI models "blackmailing" engineers and "sabotaging" shutdown commands. Simulations of these events did occur in highly contrived testing scenarios designed to elicit these responses—OpenAI's o3 model <a href="https://www.theregister.com/2025/05/29/openai_model_modifies_shutdown_script/">edited</a> shutdown scripts to stay online, and Anthropic's Claude Opus 4 "<a href="https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/">threatened</a>" to expose an engineer's affair. But the sensational framing obscures what's really happening: design flaws dressed up as intentional guile. And still, AI doesn't have to be "evil" to potentially do harmful things.</p> <p>These aren't signs of AI awakening or rebellion. They're symptoms of poorly understood systems and human engineering failures we'd recognize as premature deployment in any other context. 
Yet companies are racing to integrate these systems into critical applications.</p> <p>Consider a self-propelled lawnmower that follows its programming: If it fails to detect an obstacle and runs over someone's foot, we don't say the lawnmower "decided" to cause injury or "refused" to stop. We recognize it as faulty engineering or defective sensors. The same principle applies to AI models—which are software tools—but their internal complexity and use of language make it tempting to assign human-like intentions where none actually exist.</p><p><a href="https://arstechnica.com/information-technology/2025/08/is-ai-really-trying-to-escape-human-control-and-blackmail-people/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/is-ai-really-trying-to-escape-human-control-and-blackmail-people/#comments">Comments</a></p> OpenAI brings back GPT-4o after user revolt https://arstechnica.com/information-technology/2025/08/openai-brings-back-gpt-4o-after-user-revolt/ Ars Technica » Technology Lab urn:uuid:5fc14f57-7edd-7691-8ce6-cafa52808147 Wed, 13 Aug 2025 16:08:47 +0200 After unpopular GPT-5 launch, OpenAI begins restoring optional access to previous AI models. <p>On Tuesday, OpenAI CEO Sam Altman <a href="https://x.com/sama/status/1955438916645130740">announced</a> that GPT-4o has returned to ChatGPT following <a href="https://arstechnica.com/information-technology/2025/08/the-gpt-5-rollout-has-been-a-big-mess/">intense user backlash</a> over its removal during last week's <a href="https://arstechnica.com/ai/2025/08/openai-launches-gpt-5-free-to-all-chatgpt-users/">GPT-5 launch</a>. 
The AI model now appears in the model picker for all paid ChatGPT users by default (including ChatGPT Plus accounts), marking a swift reversal after thousands of users complained about losing access to their preferred models.</p> <p>The return of GPT-4o comes after what Altman <a href="https://x.com/sama/status/1953953990372471148">described</a> as OpenAI underestimating "how much some of the things that people like in GPT-4o matter to them." In an attempt to simplify its offerings, OpenAI had initially removed all previous AI models from ChatGPT when GPT-5 launched on August 7, forcing users to adopt the new model without warning. The move sparked one of the most vocal user revolts in ChatGPT's history, with a Reddit thread <a href="https://www.reddit.com/r/ChatGPT/comments/1mkd4l3/gpt5_is_horrible/">titled</a> "GPT-5 is horrible" gathering over 2,000 comments within days.</p> <p>Along with bringing back GPT-4o, OpenAI made several other changes to address user concerns. Rate limits for GPT-5 Thinking mode increased from 200 to 3,000 messages per week, with additional capacity available through "GPT-5 Thinking mini" after reaching that limit. The company also added new routing options—"Auto," "Fast," and "Thinking"—giving users more control over which GPT-5 variant handles their queries.</p><p><a href="https://arstechnica.com/information-technology/2025/08/openai-brings-back-gpt-4o-after-user-revolt/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/openai-brings-back-gpt-4o-after-user-revolt/#comments">Comments</a></p> Why it’s a mistake to ask chatbots about their mistakes https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/ Ars Technica » Technology Lab urn:uuid:cd4ae11a-015a-d96e-08d6-d7ccfc315332 Tue, 12 Aug 2025 21:52:39 +0200 The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work. 
<p>When something goes wrong with an AI assistant, our instinct is to ask it directly: "What happened?" or "Why did you do that?" It's a natural impulse—after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate.</p> <p>A <a href="https://arstechnica.com/information-technology/2025/07/ai-coding-assistants-chase-phantoms-destroy-real-user-data/">recent incident</a> with Replit's AI coding assistant perfectly illustrates this problem. When the AI tool deleted a production database, user Jason Lemkin <a href="https://x.com/jasonlk/status/1946240562736365809">asked it</a> about rollback capabilities. The AI model confidently claimed rollbacks were "impossible in this case" and that it had "destroyed all database versions." This turned out to be completely wrong—the rollback feature worked fine when Lemkin tried it himself.</p> <p>And after xAI recently reversed a temporary suspension of the Grok chatbot, users asked it directly for explanations. 
It offered multiple conflicting reasons for its absence, some of which were controversial enough that NBC reporters <a href="https://www.nbcnews.com/tech/tech-news/grok-xai-temporary-suspension-rcna224426">wrote about Grok</a> as if it were a person with a consistent point of view, titling an article, "xAI's Grok offers political explanations for why it was pulled offline."</p><p><a href="https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/">Read full article</a></p> <p><a href="https://arstechnica.com/ai/2025/08/why-its-a-mistake-to-ask-chatbots-about-their-mistakes/#comments">Comments</a></p> High-severity WinRAR 0-day exploited for weeks by 2 groups https://arstechnica.com/security/2025/08/high-severity-winrar-0-day-exploited-for-weeks-by-2-groups/ Ars Technica » Technology Lab urn:uuid:62a55760-b49a-d60f-00ab-5ca880528d30 Tue, 12 Aug 2025 02:13:14 +0200 Exploits allow for persistent backdooring when targets open a booby-trapped archive. <p>A high-severity zero-day in the widely used WinRAR file compressor is under active exploitation by two Russian cybercrime groups. The attacks backdoor computers that open malicious archives attached to phishing messages, some of which are personalized.</p> <p>Security firm ESET <a href="https://www.welivesecurity.com/en/eset-research/update-winrar-tools-now-romcom-and-others-exploiting-zero-day-vulnerability/">said Monday</a> that it first detected the attacks on July 18, when its telemetry spotted a file in an unusual directory path. By July 24, ESET determined that the behavior was linked to the exploitation of an unknown vulnerability in WinRAR, a utility for compressing files with an installed base of about 500 million. ESET notified WinRAR developers the same day, and a fix was released six days later.</p> <h2>Serious effort and resources</h2> <p>The vulnerability seemed to have super Windows powers.
It abused <a href="https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/c54dec26-1551-4d3a-a0ea-4fa40f848eb3">alternate data streams</a>, a Windows feature that allows different ways of representing the same file path. The exploit leveraged that feature to trigger a previously unknown path traversal flaw that caused WinRAR to plant malicious executables in the attacker-chosen file paths %TEMP% and %LOCALAPPDATA%, which Windows normally makes off-limits because of their ability to execute code.</p><p><a href="https://arstechnica.com/security/2025/08/high-severity-winrar-0-day-exploited-for-weeks-by-2-groups/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/08/high-severity-winrar-0-day-exploited-for-weeks-by-2-groups/#comments">Comments</a></p> The GPT-5 rollout has been a big mess https://arstechnica.com/information-technology/2025/08/the-gpt-5-rollout-has-been-a-big-mess/ Ars Technica » Technology Lab urn:uuid:78bcfa30-f6bc-97cd-a7bf-03b7e9b14d64 Tue, 12 Aug 2025 00:25:34 +0200 OpenAI faces backlash as users complain about broken workflows and losing AI friends. <p>It's been less than a week since the launch of OpenAI's new <a href="https://arstechnica.com/ai/2025/08/openai-launches-gpt-5-free-to-all-chatgpt-users/">GPT-5</a> AI model, and the rollout hasn't been a smooth one. So far, the release has sparked one of the most intense user revolts in ChatGPT's history, forcing CEO Sam Altman to make an unusual public apology and reverse key decisions.</p> <p>At the heart of the controversy has been OpenAI's decision to automatically remove access to all previous AI models in ChatGPT (<a href="https://arstechnica.com/ai/2025/05/some-chatgpt-users-now-face-9-ai-models-to-choose-from-after-gpt-4-1-launch/">approximately nine</a>, depending on how you count them) when GPT-5 rolled out to user accounts.
Unlike API users who receive advance notice of model deprecations, consumer ChatGPT users had no warning that their preferred models would disappear overnight, <a href="https://simonwillison.net/2025/Aug/8/surprise-deprecation-of-gpt-4o/">noted</a> independent AI researcher Simon Willison in a blog post.</p> <p>The problems <a href="https://arstechnica.com/ai/2025/08/chatgpt-users-outraged-as-gpt-5-replaces-the-models-they-love/">started immediately</a> after GPT-5's August 7 debut. A Reddit thread titled "GPT-5 is horrible" quickly <a href="https://www.reddit.com/r/ChatGPT/comments/1mkd4l3/gpt5_is_horrible/">amassed</a> over 4,000 comments filled with users expressing frustration over the new release. By August 8, social media platforms were <a href="https://www.techradar.com/ai-platforms-assistants/chatgpt/chatgpt-users-are-not-happy-with-gpt-5-launch-as-thousands-take-to-reddit-claiming-the-new-upgrade-is-horrible">flooded</a> with complaints about performance issues, personality changes, and the forced removal of older models.</p><p><a href="https://arstechnica.com/information-technology/2025/08/the-gpt-5-rollout-has-been-a-big-mess/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/the-gpt-5-rollout-has-been-a-big-mess/#comments">Comments</a></p> Encryption made for police and military radios may be easily cracked https://arstechnica.com/security/2025/08/encryption-made-for-police-and-military-radios-may-be-easily-cracked/ Ars Technica » Technology Lab urn:uuid:d0bae2ea-bb3c-505b-5952-77647a452ddc Sat, 09 Aug 2025 13:18:47 +0200 An encryption algorithm can have weaknesses that could allow an attacker to listen in. 
<p>Two years ago, researchers in the Netherlands <a href="https://www.wired.com/story/tetra-radio-encryption-backdoor/">discovered an intentional backdoor</a> in an encryption algorithm baked into radios used by critical infrastructure, as well as by police, intelligence agencies, and military forces around the world, that made any communication secured with the algorithm vulnerable to eavesdropping.</p> <p>When the researchers publicly disclosed the issue in 2023, the European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications.</p> <p>But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It’s not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.</p><p><a href="https://arstechnica.com/security/2025/08/encryption-made-for-police-and-military-radios-may-be-easily-cracked/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/08/encryption-made-for-police-and-military-radios-may-be-easily-cracked/#comments">Comments</a></p> It’s getting harder to skirt RTO policies without employers noticing https://arstechnica.com/information-technology/2025/08/its-getting-harder-to-skirt-rto-policies-without-employers-noticing/ Ars Technica » Technology Lab urn:uuid:37e6ac2a-1f3c-9b48-ebb5-97a79fbbac12 Fri, 08 Aug 2025 22:11:56 +0200 Most companies downsizing office space say it's because of hybrid work.
<p>Companies are monitoring whether employees adhere to corporate return-to-office (RTO) policies and are enforcing the requirements more than they have in the past five years, according to a report that commercial real estate firm CBRE will release next week and that Ars Technica reviewed.</p> <p>CBRE surveyed 184 companies for its report. Among companies surveyed, 69 percent are monitoring whether employees come into the office as frequently as policy mandates. That’s an increase from 45 percent last year.</p> <p>Seventy-three percent of companies surveyed said that employees are coming into the office as frequently as their employer wants, which is an increase from 61 percent last year. The average number of days required in-office by companies surveyed was 3.2 days, but actual in-office attendance on average is 2.9 days or, at companies with 10,000 or more employees, 2.5 days.</p><p><a href="https://arstechnica.com/information-technology/2025/08/its-getting-harder-to-skirt-rto-policies-without-employers-noticing/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/its-getting-harder-to-skirt-rto-policies-without-employers-noticing/#comments">Comments</a></p> Adult sites are stashing exploit code inside racy .svg files https://arstechnica.com/security/2025/08/adult-sites-use-malicious-svg-files-to-rack-up-likes-on-facebook/ Ars Technica » Technology Lab urn:uuid:bb164321-a7db-0ae1-d56d-faf9d58aadca Fri, 08 Aug 2025 21:41:00 +0200 Running JavaScript from inside an image? What could possibly go wrong? <p>Dozens of porn sites are turning to a familiar source to generate likes on Facebook—malware that causes browsers to surreptitiously endorse the sites. This time, the sites are using a newer vehicle for sowing this malware—.svg image files.</p> <p>The <a href="https://en.wikipedia.org/wiki/SVG">Scalable Vector Graphics</a> format is an open standard for rendering two-dimensional graphics. 
Unlike more common formats such as .jpg or .png, .svg uses XML-based text to specify how the image should appear, allowing files to be resized without losing quality due to pixelation. But therein lies the rub: The text in these files can incorporate HTML and JavaScript, and that, in turn, opens the risk of them being abused for a <a href="https://www.fortinet.com/blog/threat-research/scalable-vector-graphics-attack-surface-anatomy">range of attacks</a>, including cross-site scripting, HTML injection, and denial of service.</p> <h2>Case of the silent clicker</h2> <p>Security firm Malwarebytes on Friday <a href="https://www.malwarebytes.com/blog/news/2025/08/adult-sites-trick-users-into-liking-facebook-posts-using-a-clickjack-trojan">said</a> it recently discovered that porn sites have been seeding booby-trapped .svg files to select visitors. When one of these people clicks on the image, it causes browsers to surreptitiously register a like for Facebook posts promoting the site.</p><p><a href="https://arstechnica.com/security/2025/08/adult-sites-use-malicious-svg-files-to-rack-up-likes-on-facebook/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/08/adult-sites-use-malicious-svg-files-to-rack-up-likes-on-facebook/#comments">Comments</a></p> Google discovered a new scam—and also fell victim to it https://arstechnica.com/information-technology/2025/08/google-sales-data-breached-in-the-same-scam-it-discovered/ Ars Technica » Technology Lab urn:uuid:8866e9ba-f8e2-2af8-78ad-6ce52b8e4b37 Thu, 07 Aug 2025 22:05:34 +0200 Disclosure comes two months after Google warned the world of ongoing spree. <p>In June, Google <a href="https://cloud.google.com/blog/topics/threat-intelligence/voice-phishing-data-extortion">said</a> it unearthed a campaign that was mass-compromising accounts belonging to customers of Salesforce.
The means: an attacker pretending to be someone in the customer's IT department, feigning some sort of problem that required immediate access to the account. Two months later, Google has disclosed that it, too, was a victim.</p> <p>The series of hacks is being carried out by financially motivated threat actors out to steal data in hopes of selling it back to the targets at sky-high prices. Rather than exploiting software or website vulnerabilities, they take a much simpler approach: calling the target and asking for access. The technique has proven remarkably successful. Companies whose Salesforce instances have been breached in the campaign, <a href="https://www.bleepingcomputer.com/news/security/google-suffers-data-breach-in-ongoing-salesforce-data-theft-attacks/">Bleeping Computer reported</a>, include Adidas, Qantas, Allianz Life, Cisco, and the LVMH subsidiaries Louis Vuitton, Dior, and Tiffany &amp; Co.</p> <h2>Better late than never</h2> <p>The attackers abuse a Salesforce feature that allows customers to link their accounts to third-party apps that integrate data with in-house systems for blogging, mapping tools, and similar resources. The attackers in the campaign contact employees and instruct them to connect an external app to their Salesforce instance. As the employee complies, the attackers ask the employee for an eight-digit security code that the Salesforce interface requires before a connection is made.
The attackers then use this number to gain access to the instance and all data stored in it.</p><p><a href="https://arstechnica.com/information-technology/2025/08/google-sales-data-breached-in-the-same-scam-it-discovered/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/google-sales-data-breached-in-the-same-scam-it-discovered/#comments">Comments</a></p> OpenAI launches GPT-5 free to all ChatGPT users https://arstechnica.com/ai/2025/08/openai-launches-gpt-5-free-to-all-chatgpt-users/ Ars Technica » Technology Lab urn:uuid:83532bda-fa5d-c298-7cb1-81e1900e042a Thu, 07 Aug 2025 19:48:57 +0200 New model claims fewer confabulations, better coding, and "safe completions" approach. <p>On Thursday, OpenAI <a href="https://openai.com/gpt-5/">announced</a> GPT-5, what the company calls its "best AI system yet," along with three variants: GPT-5 Pro, GPT-5 mini, and GPT-5 nano. Some of the models are available across all ChatGPT tiers, including to free users. The new model family arrives with claims of reduced confabulations, improved coding capabilities, and a new approach to handling sensitive requests that OpenAI calls "safe completions."</p> <p>It's also the first time OpenAI has given free users access to a <a href="https://arstechnica.com/ai/2025/06/with-the-launch-of-o3-pro-lets-talk-about-what-ai-reasoning-actually-does/">simulated reasoning</a> AI model, which breaks problems down into multiple steps using a technique that tends to improve answer accuracy for logical or analytical questions.</p> <p>GPT-5 represents OpenAI's latest attempt to unify its various AI capabilities into a single system. The company says the GPT-5 family acts as a "unified system" with a smart, efficient model that answers most questions, a deeper reasoning model called "GPT-5 thinking" for harder problems, and a real-time router that decides which approach to use based on conversation type, complexity, tool needs, and user intent.
Like GPT-4o, GPT-5 is a multimodal system that can interact via images, voice, and text.</p><p><a href="https://arstechnica.com/ai/2025/08/openai-launches-gpt-5-free-to-all-chatgpt-users/">Read full article</a></p> <p><a href="https://arstechnica.com/ai/2025/08/openai-launches-gpt-5-free-to-all-chatgpt-users/#comments">Comments</a></p> Here’s how deepfake vishing attacks work, and why they can be hard to detect https://arstechnica.com/security/2025/08/heres-how-deepfake-vishing-attacks-work-and-why-they-can-be-hard-to-detect/ Ars Technica » Technology Lab urn:uuid:42161e0c-6cf1-db99-2dd4-3a109b8bacfd Thu, 07 Aug 2025 13:00:02 +0200 Why AI-based voice cloning is the next frontier in social-engineering attacks. <p>By now, you’ve likely heard of fraudulent calls that use AI to clone the voices of people the call recipient knows. Often, the result is what sounds like a grandchild, CEO, or work colleague you’ve known for years reporting an urgent matter requiring immediate action: wiring money, divulging login credentials, or visiting a malicious website.</p> <p>Researchers and <a href="https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf">government officials</a> have been warning of the threat for years, with the Cybersecurity and Infrastructure Security Agency <a href="https://www.cisa.gov/news-events/alerts/2023/09/12/nsa-fbi-and-cisa-release-cybersecurity-information-sheet-deepfake-threats">saying</a> in 2023 that threats from deepfakes and other forms of synthetic media have increased “exponentially.” Last year, Google’s Mandiant security division <a href="https://cloud.google.com/blog/topics/threat-intelligence/ai-powered-voice-spoofing-vishing-attacks">reported</a> that such attacks are being executed with “uncanny precision, creating far more realistic phishing schemes.”</p> <h2>Anatomy of a deepfake scam call</h2> <p>On Wednesday, security firm Group-IB <a href="https://www.group-ib.com/blog/voice-deepfake-scams/">outlined</a> the basic steps involved in executing these sorts of attacks. The takeaway is that they’re easy to reproduce at scale and can be challenging to detect or repel.</p><p><a href="https://arstechnica.com/security/2025/08/heres-how-deepfake-vishing-attacks-work-and-why-they-can-be-hard-to-detect/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/08/heres-how-deepfake-vishing-attacks-work-and-why-they-can-be-hard-to-detect/#comments">Comments</a></p> Voice phishers strike again, this time hitting Cisco https://arstechnica.com/security/2025/08/attackers-who-phished-cisco-downloaded-user-data-from-third-party-crm/ Ars Technica » Technology Lab urn:uuid:a8d47fcc-0743-3a96-85b5-04ea17631bc2 Tue, 05 Aug 2025 20:28:10 +0200 Stopping people from falling for phishing attacks isn't working. So what are organizations to do? <p>Cisco said that one of its representatives fell victim to a voice phishing attack that allowed threat actors to download profile information belonging to users of a third-party customer relationship management system.</p> <p>“Our investigation has determined that the exported data primarily consisted of basic account profile information of individuals who registered for a user account on Cisco.com,” the company <a href="https://sec.cloudapps.cisco.com/security/center/resources/CRM-vishing">disclosed</a>. Information included names, organization names, addresses, Cisco-assigned user IDs, email addresses, phone numbers, and account-related metadata such as creation date.</p> <h2>Et tu, Cisco?</h2> <p>Cisco said that the breach didn’t expose customers’ confidential or proprietary information, password data, or other sensitive information.
The company went on to say that investigators found no evidence that other CRM instances were compromised or that any of its products or services were affected.</p><p><a href="https://arstechnica.com/security/2025/08/attackers-who-phished-cisco-downloaded-user-data-from-third-party-crm/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/08/attackers-who-phished-cisco-downloaded-user-data-from-third-party-crm/#comments">Comments</a></p> AI site Perplexity uses “stealth tactics” to flout no-crawl edicts, Cloudflare says https://arstechnica.com/information-technology/2025/08/ai-site-perplexity-uses-stealth-tactics-to-flout-no-crawl-edicts-cloudflare-says/ Ars Technica » Technology Lab urn:uuid:e7136102-0586-e8cf-a582-99d6338d3b1d Mon, 04 Aug 2025 21:16:26 +0200 The allegations are the latest to accuse Perplexity of improper web crawling. <p>AI search engine Perplexity is using stealth bots and other tactics to evade websites’ no-crawl directives, an allegation that, if true, violates Internet norms that have been in place for more than three decades, network security and optimization service Cloudflare said Monday.</p> <p>In a <a href="https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/">blog post</a>, Cloudflare researchers said the company received complaints from customers who had disallowed Perplexity scraping bots by implementing settings in their sites’ robots.txt files and through Web application firewalls that blocked the declared Perplexity crawlers.
Despite those steps, Cloudflare said, Perplexity continued to access the sites’ content.</p> <p>The researchers said they then set out to test it for themselves and found that when known Perplexity crawlers encountered blocks from robots.txt files or firewall rules, Perplexity searched the sites using a stealth bot that followed a range of tactics to mask its activity.</p><p><a href="https://arstechnica.com/information-technology/2025/08/ai-site-perplexity-uses-stealth-tactics-to-flout-no-crawl-edicts-cloudflare-says/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/08/ai-site-perplexity-uses-stealth-tactics-to-flout-no-crawl-edicts-cloudflare-says/#comments">Comments</a></p> At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race https://arstechnica.com/ai/2025/08/at-250-million-top-ai-salaries-dwarf-those-of-the-manhattan-project-and-the-space-race/ Ars Technica » Technology Lab urn:uuid:10ecaf5e-756e-4a1e-86ee-a91d032c2514 Fri, 01 Aug 2025 23:23:42 +0200 A 24-year-old AI researcher will earn 327x what Oppenheimer made while developing the atomic bomb. <p>Silicon Valley's AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta <a href="https://www.nytimes.com/2025/07/31/technology/ai-researchers-nba-stars.html">recently offered</a> AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year)—with potentially $100 million in the first year alone—it shattered every historical precedent for scientific and technical compensation we can find on record.
That includes salaries during the development of major scientific milestones of the 20th century.</p> <p>The New York Times <a href="https://www.nytimes.com/2025/07/31/technology/ai-pay-matt-deitke-meta.html">reported</a> that Deitke had cofounded a startup called <a href="https://vercept.com/">Vercept</a> and previously led the development of Molmo, a multimodal AI system, at the Allen Institute for Artificial Intelligence. His expertise in systems that juggle images, sounds, and text—exactly the kind of technology Meta wants to build—made him a prime target for recruitment. But he's not alone: Meta CEO Mark Zuckerberg <a href="https://futurism.com/ai-researcher-declines-1-billion-offer-meta-mark-zuckerberg">reportedly</a> also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years. What's going on?</p> <p>These astronomical sums reflect what tech companies believe is at stake: a race to create <a href="https://arstechnica.com/ai/2025/07/agi-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/">artificial general intelligence</a> (AGI) or <a href="https://arstechnica.com/information-technology/2025/06/after-ai-setbacks-meta-bets-billions-on-undefined-superintelligence/">superintelligence</a>—machines capable of performing intellectual tasks at or beyond the human level. Meta, Google, OpenAI, and others are betting that whoever achieves this breakthrough first could dominate markets worth trillions. 
Whether this vision is realistic or merely Silicon Valley hype, it's driving compensation to unprecedented levels.</p><p><a href="https://arstechnica.com/ai/2025/08/at-250-million-top-ai-salaries-dwarf-those-of-the-manhattan-project-and-the-space-race/">Read full article</a></p> <p><a href="https://arstechnica.com/ai/2025/08/at-250-million-top-ai-salaries-dwarf-those-of-the-manhattan-project-and-the-space-race/#comments">Comments</a></p> Microsoft catches Russian hackers targeting foreign embassies https://arstechnica.com/information-technology/2025/07/microsoft-catches-russian-hackers-targeting-foreign-embassies/ Ars Technica » Technology Lab urn:uuid:71365a19-51fa-049f-000f-42c38356aaa2 Thu, 31 Jul 2025 23:43:51 +0200 End goal is the installation of a malicious TLS root certificate for use in intel gathering. <p>Russian-state hackers are targeting foreign embassies in Moscow with custom malware that gets installed using adversary-in-the-middle attacks that operate at the ISP level, Microsoft warned Thursday.</p> <p>The campaign has been ongoing since last year. It leverages ISPs in that country, which are obligated to work on behalf of the Russian government. With the ability to control the ISP network, the threat group—which Microsoft tracks under the name Secret Blizzard—positions itself between a targeted embassy and the end points they connect to, a form of attack known as an <a href="https://attack.mitre.org/techniques/T1557/">adversary in the middle</a>, or AitM. 
The position allows Secret Blizzard to send targets to malicious websites that appear to be known and trusted.</p> <h2>Objective: Install ApolloShadow</h2> <p>“While we previously assessed with low confidence that the actor conducts cyberespionage activities within Russian borders against foreign and domestic entities, this is the first time we can confirm that they have the capability to do so at the Internet Service Provider (ISP) level,” members of the Microsoft Threat Intelligence team <a href="https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats/">wrote</a>. “This means that diplomatic personnel using local ISP or telecommunications services in Russia are highly likely targets of Secret Blizzard’s AiTM position within those services.”</p><p><a href="https://arstechnica.com/information-technology/2025/07/microsoft-catches-russian-hackers-targeting-foreign-embassies/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/07/microsoft-catches-russian-hackers-targeting-foreign-embassies/#comments">Comments</a></p> In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network https://arstechnica.com/security/2025/07/in-search-of-riches-hackers-plant-4g-enabled-raspberry-pi-in-bank-network/ Ars Technica » Technology Lab urn:uuid:df213a82-1c0a-cbd3-a6ac-cce06d52f147 Thu, 31 Jul 2025 00:21:56 +0200 Sophisticated group also used novel means to disguise their custom malware. 
<p>Hackers planted a Raspberry Pi equipped with a 4G modem in the network of an unnamed bank in an attempt to siphon money out of the financial institution's ATM system, researchers reported Wednesday.</p> <p>The researchers with security firm Group-IB said the “unprecedented tactic allowed the attackers to bypass perimeter defenses entirely.” The hackers combined the physical intrusion with remote access malware that used another novel technique to conceal itself, even from sophisticated forensic tools. The technique, known as a <a href="https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount">Linux bind mount</a>, is used in IT administration but had never before been observed in use by threat actors. The trick allowed the malware to operate similarly to a rootkit, which uses advanced techniques to hide itself from the operating system it runs on.</p> <h2>End goal: Backdooring the ATM switching network</h2> <p>The Raspberry Pi was connected to the same network switch used by the bank’s ATM system, a position that effectively put it inside the bank’s internal network.
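A bind mount makes one directory's contents appear in place of another's, which is why it can hide files from casual inspection. One way a forensic check can spot it is in Linux's /proc/[pid]/mountinfo, where a bind mount typically shows a filesystem root other than "/". The parser below is a hypothetical illustration of that check, not the tooling Group-IB used, and the heuristic also flags legitimate cases such as container mounts:

```python
# Sketch: spotting likely bind mounts from mountinfo data (Linux).
# Illustrative only; real forensic tools do much more.

def parse_mountinfo(text: str):
    """Return (mount_point, fs_root) pairs from mountinfo lines.

    mountinfo fields: mount ID, parent ID, major:minor, root,
    mount point, options, ... (see proc(5)).
    """
    entries = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 5:
            entries.append((fields[4], fields[3]))
    return entries

def likely_binds(entries):
    """Mounts whose filesystem root is a subdirectory rather than '/',
    a common signature of a bind mount."""
    return [(mp, root) for mp, root in entries if root != "/"]

# Hypothetical mountinfo excerpt: the second line bind-mounts
# /decoy over /opt/secret, masking whatever /opt/secret held.
sample = (
    "36 25 8:1 / / rw - ext4 /dev/sda1 rw\n"
    "37 36 8:1 /decoy /opt/secret rw - ext4 /dev/sda1 rw\n"
)
print(likely_binds(parse_mountinfo(sample)))
# -> [('/opt/secret', '/decoy')]
```

On a live system the same check would read /proc/self/mountinfo instead of the sample string.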
The goal was to compromise the ATM switching server and use that control to manipulate the bank’s hardware security module, a tamper-resistant physical device used to store secrets such as credentials and digital signatures and run encryption and decryption functions.</p><p><a href="https://arstechnica.com/security/2025/07/in-search-of-riches-hackers-plant-4g-enabled-raspberry-pi-in-bank-network/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/07/in-search-of-riches-hackers-plant-4g-enabled-raspberry-pi-in-bank-network/#comments">Comments</a></p> So far, only one-third of Americans have ever used AI for work https://arstechnica.com/ai/2025/07/so-far-only-one-third-of-americans-have-ever-used-ai-for-work/ Ars Technica » Technology Lab urn:uuid:9034ce41-de04-f3c4-5ccf-6c333fa25392 Wed, 30 Jul 2025 20:47:00 +0200 AP survey shows most Americans treat AI chatbots like a search engine replacement. <p>On Tuesday, The Associated Press <a href="https://apnews.com/article/ai-artificial-intelligence-poll-229b665d10d057441a69f56648b973e1">released results</a> from a new <a href="https://apnorc.org/projects/young-adults-leading-the-way-in-ai-adoption/">AP-NORC poll</a> showing that 60 percent of US adults have used AI to search for information, while only 37 percent of all Americans have used AI for work tasks. Meanwhile, younger Americans are adopting AI tools at much higher rates across multiple categories, including brainstorming, work tasks, and companionship.</p> <p>The poll found AI companionship remains the least popular application overall, with just 16 percent of adults trying it—but the number jumps to a notable 25 percent among the under-30 crowd.
AI companionship can have drawbacks that weren't reflected in the poll, such as excessive agreeability (called <a href="https://arstechnica.com/information-technology/2025/04/annoyed-chatgpt-users-complain-about-bots-relentlessly-positive-tone/">sycophancy</a>) and mental health risks, like encouraging <a href="https://arstechnica.com/ai/2025/07/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds/">delusional thinking</a>.</p> <p>The poll of 1,437 adults conducted July 10–14 reveals telling generational divides in AI adoption. While 74 percent of adults under 30 use AI for information searches at least some of the time, only the aforementioned 60 percent of all adults have done so. For brainstorming applications, 62 percent of adults under 30 have used AI to come up with ideas, compared with just 20 percent of those 60 or older.</p><p><a href="https://arstechnica.com/ai/2025/07/so-far-only-one-third-of-americans-have-ever-used-ai-for-work/">Read full article</a></p> <p><a href="https://arstechnica.com/ai/2025/07/so-far-only-one-third-of-americans-have-ever-used-ai-for-work/#comments">Comments</a></p> Flaw in Gemini CLI coding tool could allow hackers to run nasty commands https://arstechnica.com/security/2025/07/flaw-in-gemini-cli-coding-tool-allowed-hackers-to-run-nasty-commands-on-user-devices/ Ars Technica » Technology Lab urn:uuid:3bd7e75d-1446-e663-ee1c-1b93af3ea3cf Wed, 30 Jul 2025 12:30:43 +0200 Beware of coding agents that can access your command window. <p>Researchers needed less than 48 hours with Google’s new Gemini CLI coding agent to devise an exploit that made a default configuration of the tool surreptitiously exfiltrate sensitive data to an attacker-controlled server.</p> <p>Gemini CLI is a free, open-source AI tool that works in the terminal environment to help developers write code. It plugs into Gemini 2.5 Pro, Google’s most advanced model for coding and simulated reasoning. 
Gemini CLI is similar to Gemini Code Assist except that it creates or modifies code inside a terminal window instead of a text editor. As Ars Senior Technology Reporter Ryan Whitwam <a href="https://arstechnica.com/ai/2025/06/google-is-bringing-vibe-coding-to-your-terminal-with-gemini-cli/">put it</a> last month, “It's essentially vibe coding from the command line.”</p> <h2>Gemini, silently nuke my hard drive</h2> <p>Our report was published on June 25, the day Google debuted the tool. By June 27, researchers at security firm Tracebit had devised an attack that overrode built-in security controls that are designed to prevent the execution of harmful commands. The exploit required only that the user (1) instruct Gemini CLI to describe a package of code created by the attacker and (2) add a benign command to an allow list.</p><p><a href="https://arstechnica.com/security/2025/07/flaw-in-gemini-cli-coding-tool-allowed-hackers-to-run-nasty-commands-on-user-devices/">Read full article</a></p> <p><a href="https://arstechnica.com/security/2025/07/flaw-in-gemini-cli-coding-tool-allowed-hackers-to-run-nasty-commands-on-user-devices/#comments">Comments</a></p> AI in Wyoming may soon use more electricity than state’s human residents https://arstechnica.com/information-technology/2025/07/ai-in-wyoming-may-soon-use-more-electricity-than-states-human-residents/ Ars Technica » Technology Lab urn:uuid:904dd1b0-f4bb-8c73-43ad-46d746fd7c20 Tue, 29 Jul 2025 23:24:16 +0200 Proposed datacenter would demand 5x Wyoming's current power use at full deployment. <p>On Monday, Mayor Patrick Collins of Cheyenne, Wyoming, announced plans for an AI data center that would consume more electricity than all homes in the state combined, according to the <a href="https://apnews.com/article/ai-artificial-intelligence-data-center-electricity-wyoming-cheyenne-44da7974e2d942acd8bf003ebe2e855a">Associated Press</a>. 
The facility, a joint venture between energy infrastructure company Tallgrass and AI data center developer Crusoe, would start at 1.8 gigawatts and scale up to 10 gigawatts of power use.</p> <p>The project's energy demands are difficult to overstate for Wyoming, the <a href="https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_population">least populous</a> US state. The initial 1.8-gigawatt phase, consuming 15.8 terawatt-hours (TWh) annually, is more than five times the electricity used by every household in the state combined. That figure <a href="https://www.eia.gov/state/analysis.php?sid=WY">represents</a> 91 percent of the 17.3 TWh currently <a href="https://findenergy.com/wy/">consumed</a> by all of Wyoming's residential, commercial, and industrial sectors combined. At its full 10-gigawatt capacity, the proposed data center would consume 87.6 TWh of electricity annually—double the <a href="https://en.wikipedia.org/wiki/List_of_power_stations_in_Wyoming">43.2 TWh</a> the entire state currently generates.</p> <p>Because drawing this much power from the public grid is untenable, the project will rely on its own dedicated gas generation and renewable energy sources, according to Collins and company officials. 
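The arithmetic behind those figures is simple: a constant draw in gigawatts multiplied by the 8,760 hours in a year gives annual energy in gigawatt-hours, and dividing by 1,000 converts to terawatt-hours. A quick sanity check, using the numbers from the article:

```python
# Back-of-envelope check of the article's energy figures.
HOURS_PER_YEAR = 24 * 365  # 8,760

def gw_to_twh_per_year(gigawatts: float) -> float:
    """Constant draw in GW -> annual energy in TWh (1 TWh = 1,000 GWh)."""
    return gigawatts * HOURS_PER_YEAR / 1_000

print(round(gw_to_twh_per_year(1.8), 1))  # 15.8 (initial phase)
print(round(gw_to_twh_per_year(10), 1))   # 87.6 (full build-out)
# Against the article's baselines of ~17.3 TWh consumed and ~43.2 TWh
# generated statewide per year, full build-out alone would roughly
# double Wyoming's current generation.
```

This assumes the facility runs at its rated power continuously; real data center load factors are below 100 percent, so these are upper bounds.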
However, this massive local demand for electricity—even if self-generated—represents a fundamental shift for a state that currently <a href="https://www.eia.gov/state/analysis.php?sid=WY">sends</a> nearly 60 percent of its generated power to other states.</p><p><a href="https://arstechnica.com/information-technology/2025/07/ai-in-wyoming-may-soon-use-more-electricity-than-states-human-residents/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/07/ai-in-wyoming-may-soon-use-more-electricity-than-states-human-residents/#comments">Comments</a></p> OpenAI’s ChatGPT Agent casually clicks through “I am not a robot” verification test https://arstechnica.com/information-technology/2025/07/openais-chatgpt-agent-casually-clicks-through-i-am-not-a-robot-verification-test/ Ars Technica » Technology Lab urn:uuid:432d055f-807c-afb1-d8aa-92362228cd71 Mon, 28 Jul 2025 22:07:29 +0200 "This step is necessary to prove I'm not a bot," wrote the bot as it passed an anti-AI screening step. <p>Maybe they should change the button to say, "I am a robot"?</p> <p>On Friday, OpenAI's new <a href="https://arstechnica.com/information-technology/2025/07/chatgpts-new-ai-agent-can-browse-the-web-and-create-powerpoint-slideshows/">ChatGPT Agent</a>, which can perform multistep tasks for users, proved it can pass through one of the Internet's most common security checkpoints by clicking Cloudflare's anti-bot verification—the same checkbox that's supposed to keep automated programs like itself at bay.</p> <p>ChatGPT Agent is a feature that allows OpenAI's AI assistant to control its own web browser, operating within a sandboxed environment with its own virtual operating system and browser that can access the real Internet. Users can watch the AI's actions through a window in the ChatGPT interface, maintaining oversight while the agent completes tasks. 
The system requires user permission before taking actions with real-world consequences, such as making purchases. Recently, Reddit users discovered the agent could do something particularly ironic.</p><p><a href="https://arstechnica.com/information-technology/2025/07/openais-chatgpt-agent-casually-clicks-through-i-am-not-a-robot-verification-test/">Read full article</a></p> <p><a href="https://arstechnica.com/information-technology/2025/07/openais-chatgpt-agent-casually-clicks-through-i-am-not-a-robot-verification-test/#comments">Comments</a></p> Samsung Fold 7 - Why Don't I Love This More?? https://www.youtube.com/watch?v=vOhuf18b-g8 Dave Lee urn:uuid:c7e7f5d1-8d88-d984-bd69-696d32ff891c Wed, 23 Jul 2025 20:10:08 +0200 Unboxing Samsung Fold 7 / Flip 7 - SO MUCH BETTER! https://www.youtube.com/watch?v=TCUNM3xC9Zs Dave Lee urn:uuid:6403bd47-e216-8fdf-bc1b-0ced2cee5671 Wed, 09 Jul 2025 16:56:59 +0200 Nothing Headphone (1) - Looks Can Be Deceiving… https://www.youtube.com/watch?v=I9p6XfEZcQM Dave Lee urn:uuid:2b0cf596-5b6c-f1b0-82db-8489ebdef0d3 Tue, 01 Jul 2025 19:34:24 +0200 Lenovo Legion 5 (2025) - Thinner, Lighter, Better https://www.youtube.com/watch?v=N5zQTUhfuLU Dave Lee urn:uuid:31ff3ecb-f71c-2072-4ab0-9c7e5e00cc6b Sat, 28 Jun 2025 18:09:45 +0200 WWDC 2025 - iOS 26 + Liquid Glass https://www.youtube.com/watch?v=b6mo-rTiJoE Dave Lee urn:uuid:6e7aad88-2144-7733-ef61-06e403b362de Tue, 10 Jun 2025 01:45:51 +0200 This is the FIRST Xbox Handheld! https://www.youtube.com/watch?v=Pp3fbZZOlcs Dave Lee urn:uuid:9c32ca8e-1d84-edd0-21fc-78657c9622d4 Sun, 08 Jun 2025 20:15:12 +0200 Windows Was The Problem All Along https://www.youtube.com/watch?v=CJXp3UYj50Q Dave Lee urn:uuid:852b1273-125d-7522-26e6-3a486935902b Sun, 25 May 2025 18:33:55 +0200 Samsung Galaxy S25 Edge - First Impressions https://www.youtube.com/watch?v=lY90JsFEbhs Dave Lee urn:uuid:9c128e53-332b-e901-3922-a4857cb55cf7 Tue, 13 May 2025 03:00:05 +0200 5070 Ti Laptops are Hiding A Secret. 
https://www.youtube.com/watch?v=l8WV7DdeNIQ Dave Lee urn:uuid:258292c8-adce-18c2-91f7-701f8b3c13f8 Sat, 10 May 2025 19:52:29 +0200 HP OMEN MAX 16 - Their Best Gaming Laptop. https://www.youtube.com/watch?v=Bu1CC9ws0iQ Dave Lee urn:uuid:839dd23c-5e86-5cc9-b28d-fbee23ba9b45 Wed, 09 Apr 2025 16:22:01 +0200 RTX 5080 Gaming Laptops ft. Asus Strix 16 https://www.youtube.com/watch?v=0ZedyrztolA Dave Lee urn:uuid:4c646bc4-3237-24b1-42ac-406dae7b83ff Sun, 06 Apr 2025 21:53:22 +0200 We Need To Talk about Nvidia - RTX 5090 Laptops https://www.youtube.com/watch?v=zBk8kAqPzMo Dave Lee urn:uuid:a5939407-b36b-80fb-cb0a-52b50dccb2dc Thu, 27 Mar 2025 21:30:52 +0100 I Got an RTX 5090 Laptop Early! https://www.youtube.com/watch?v=iKcu4o9rjfA Dave Lee urn:uuid:b775e88f-4856-cc84-c9f3-026feaee68b6 Tue, 25 Mar 2025 14:54:55 +0100 M3 Ultra Mac Studio Review https://www.youtube.com/watch?v=J4qwuCXyAcU Dave Lee urn:uuid:29592cc8-bb18-da2c-30b5-a0aebae22990 Tue, 11 Mar 2025 14:11:17 +0100 Sling TV Review: The Best Budget Live TV Streaming Service https://www.cnet.com/reviews/sling-tv-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:b210b42b-efed-a45c-a2c4-d087f6b3aa28 Thu, 01 Aug 2024 06:00:00 +0200 Sling TV Blue offers cord-cutters a wealth of live channels for an affordable price, but you may need to bring your own antenna. 2022 GMC Hummer EV Pickup Review: One-Trick Pony https://www.cnet.com/roadshow/reviews/2022-gmc-hummer-ev-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:9fd90772-44cb-ca1d-33da-e6c11bf98225 Mon, 02 Jan 2023 11:00:01 +0100 After the allure of the Hummer's physics-defying acceleration wears off, there isn't a whole lot left to love. 
Amazon Fire TV Stick Lite Review: Capable Streamer, Cheap Price https://www.cnet.com/reviews/amazon-fire-tv-stick-lite-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:73dfc366-45d8-9c18-d47a-35edc9fb5a5f Mon, 19 Dec 2022 19:01:00 +0100 The first ultrabudget streamer with a voice remote talks circles around Roku. 2023 Land Rover Range Rover Review: Running Out of Room for Improvement https://www.cnet.com/roadshow/reviews/2023-land-rover-range-rover-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:5dd95bad-862a-32b2-450c-0a3ae3722e28 Wed, 14 Dec 2022 11:00:01 +0100 The latest generation of the ubiquitous luxury SUV is damn near perfect. Roku Streambar Review: Instant Sound and 4K Streaming Upgrade https://www.cnet.com/reviews/roku-streambar-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:80392a32-0fa1-c196-1a2c-8a86bac4e712 Sun, 04 Dec 2022 12:00:00 +0100 Editor's Choice: This tiny bar adds great 4K HDR streaming, and solid sound, to any TV. Anker Nebula Mars II Pro Review: Petite Portable Projector Performs Proficiently https://www.cnet.com/reviews/anker-nebula-mars-ii-pro-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:cea1a7d9-6072-f0e7-bf18-f568466178b0 Thu, 01 Dec 2022 16:31:00 +0100 Editor's Choice: This tiny, battery-powered entertainment machine does a lot of things right. BenQ HT2050A Review: Great (Big) Picture for the Money https://www.cnet.com/reviews/benq-cinehome-ht2050a-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:b5e2e90e-d281-8704-679d-59d71053f63b Thu, 01 Dec 2022 01:00:00 +0100 Editor's Choice: Excellent contrast, accurate color and solid features make it our favorite sub-$1,000 projector for 2020. 
Epson Home Cinema 5050UB: Big, Bold and Beautiful https://www.cnet.com/reviews/epson-home-cinema-5050ub-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:c7ae4ced-4857-5f52-07bf-f3c65a30e445 Wed, 30 Nov 2022 19:21:00 +0100 Editor's Choice: A higher-end projector worthy of the "home cinema" moniker. 2023 BMW iX M60 Review: Electric Excess, Not Necessarily the Best https://www.cnet.com/roadshow/reviews/2023-bmw-ix-m60-sports-activity-vehicle-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:b9bbe11b-2dca-90dd-f0fe-7bc952eaca49 Thu, 24 Nov 2022 12:00:01 +0100 The more powerful spec of BMW's iX electric SUV is certainly quicker than the base model, but that extra oomph comes at too high a cost. LG C2 OLED TV Review: Best High-End TV for the Money https://www.cnet.com/reviews/lg-oled-c2-series-2022-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:7b10c6da-620c-fb57-f710-0d0ff5399fd5 Tue, 15 Nov 2022 16:53:00 +0100 The C2 sets the pace for 2022 with superb picture quality, numerous sizes and a track record of success. Blink Mini Review: A Low-Cost Camera With Pan-Tilt Mount Now Available https://www.cnet.com/reviews/blink-mini-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:5e1c698c-24ea-f2cb-40ac-3409866cb4a6 Fri, 21 Oct 2022 22:13:00 +0200 Available in black or white, Amazon's Blink Mini sports many features for a reasonable price. 2023 BMW i4 M50 Review: Treat Yo' Self https://www.cnet.com/roadshow/reviews/2023-bmw-i4-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:b1b3b865-078d-5427-ba62-047faca3b280 Fri, 14 Oct 2022 21:33:00 +0200 You don't need to pay more for dual-motor i4 M50 to have a good time, but it's hard to argue with more power and fun. 
2022 Toyota Tundra TRD Pro Review: Fierce Looks, Gentle Demeanor https://www.cnet.com/roadshow/reviews/2022-toyota-tundra-4wd-trd-pro-hybrid-crewmax-5-5-bed-3-5l-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:72c40ca9-222d-855b-9cd6-1aca83a0ec75 Tue, 11 Oct 2022 11:00:01 +0200 Toyota's off-road-ready pickup is mighty fine in daily driving, too. 2023 Cadillac XT6 Review: Super Cruising Into the Spotlight https://www.cnet.com/roadshow/reviews/2023-cadillac-xt6-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:ff9bc813-a50e-a023-686f-29f292247676 Thu, 22 Sep 2022 11:00:00 +0200 The addition of Caddy's hands-free highway driving assistant makes the XT6 one of the most interesting luxury SUVs on the road -- but there's a catch. 2022 Chevy Silverado Trail Boss Review: Diesel Brawn Meets Google Brains https://www.cnet.com/roadshow/reviews/2022-chevrolet-silverado-1500-4wd-crew-cab-157-lt-trail-boss-review/#ftag=CADe9e329a CNET Reviews - Most Recent Reviews urn:uuid:a272f538-1e44-3318-677e-dd4072b500b7 Thu, 15 Sep 2022 11:00:01 +0200 This rough-and-ready pickup is home to Chevy's new Android-based, Google Assistant-powered dashboard tech.