The Flea FM

20/07/2024

If you missed Laurel Windwood's talk about elderly care, Flea.fm will replay the interview this week!

19/07/2024

Within hours of mass IT outages on Friday, a trove of new domains started popping up online. The one common factor? The name CrowdStrike, the company at the center of the global tech outage that’s delayed flights and disrupted emergency services.

Many websites appear to promise help. Names include crowdstriketoken.com, crowdstrikedown.site, crowdstrikefix.com and fix-crowdstrike-bsod.com, according to a UK-based cybersecurity researcher who specializes in monitoring credential phishing.

The new domains are poised to lure people desperate to get their systems back up and running into clicking malicious links. Gullible and panicked people present ripe targets for scammers. While attempts to set up phishing sites in the wake of a big event are nothing new, the scale of Friday's outages means there is a very wide field of potential victims.

A few sites, crowdstrike-helpdesk.com, microsoftcrowdstrike.com and crowdstrikeclaim, were still under construction at the time of writing, according to the researcher, who asked that his name be withheld because of concerns that hackers might target him.

He told me he started looking around midday in the UK, and saw the new domains registered as early as 4:12 a.m. EDT. He’s found 28 sites so far, he said.

Websites take a few hours to construct, he told me by phone. He said he expected most of them were likely opportunistic actors trying to take advantage of CrowdStrike’s woes.

On Friday, the US Cybersecurity and Infrastructure Security Agency said it had already observed threat actors taking advantage of this incident for phishing and other malicious activity, and urged people to avoid clicking suspicious links.

George Kurtz, chief executive officer of CrowdStrike, warned affected customers in a post on X to “ensure they’re communicating with CrowdStrike representatives through official channels,” adding that his team is fully mobilized to ensure the security and stability of its customers.

Bryan Palma, chief executive officer of Trellix, told me his company is inundated with calls from CrowdStrike customers seeking help to get back online. Downed computers were safe, he said, because hackers can’t breach a bricked device. “Those machines are the safest machines on the planet because they’re not connected to anything,” he told me.

Rebooting in safe mode will protect people, he said, but getting back online brings other risks too. Those span not only phishing campaigns but also the possibility that hackers will scout for customers who disable key CrowdStrike protections in the maelstrom.

Azim Shukuhi, a security researcher at Cisco Talos, warned in a post on X that customers would “most likely disable or modify their CrowdStrike protections” as they start to recover.

Some may even abandon the company altogether in a fit of concern, or pique. For instance, Tesla CEO Elon Musk posted on his social media site X on Friday, “We just deleted CrowdStrike from all our systems.”

From Bloomberg

If your business has been impacted by the CrowdStrike Outage, we are here to help.

11/07/2024

Devonport Locals Facebook Group (NZ)

03/07/2024

Amid a U.S. boom in betting online, the European companies behind FanDuel and BetMGM are using features in America that they dropped in Britain after acknowledging them as risks to gamblers.

03/07/2024

JOIN US TODAY WHEN DR ROBIN HAS AN EXCLUSIVE INTERVIEW WITH BELINDA JANE, MINDSET COACH, JULY 4TH NOON

30/06/2024

From Bloomberg
Character.ai lets users design their own generative artificial intelligence chatbots to exchange texts with. Imagine, say, a motivational coach modeled on a favorite video game character. Even if it sounds like a service you’d never use, lots of people are — at least according to the startup’s own numbers.

Last week, Character.ai said it serves about 20,000 queries per second — roughly 20% of the request volume served by Google search. Each query is counted when Character.ai’s bot responds to a message sent by a user. The service is particularly popular on mobile and with younger users, where it rivals usage of OpenAI’s ChatGPT, according to stats from last year.

With these bots, the conversation is often more intimate than asking for coding answers or translation help. Users are exchanging volumes of messages and developing relationships; Character.ai claimed last year that users who send at least one message are on average using the service for 2 hours a day.

On Thursday, the company ratcheted up the potential for emotional attachment by launching voice calls with its AI characters. “It’s like having a phone call with a friend,” the company said.

Though the company’s policy forbids use of the app for “obscene or pornographic” content, some users try to find ways to sext with the bots. It’s a pattern that’s been around since the dawn of the internet. On Reddit, Character.ai users trade tips about how to get past content filters to engage in spicier chats, but even those who keep their interactions PG-13 still feel invested in these characters.

Emotional attachment to a bot means big, splashy user engagement numbers. But it also comes with risks, as popularized in the movie Her. When the models get adjusted or tweaked, users are more likely to feel upset or personally wounded. In the past few weeks, Character.ai users have been complaining that their bots’ personalities have changed, or that they’re suddenly unable to have conversations like they once did. Last year, users of the chatbot Replika were up in arms when the company suddenly limited their ability to sext with their bots.

A Character.ai spokesperson said it didn’t change the bots, but users may have encountered tests for new features.

Feeling attached to a chatbot has ethical repercussions for humans, said Giada Pistilli, principal ethicist at AI startup Hugging Face. Chatbots like Character.ai’s are designed to keep people chatting for long periods by using tactics like prompting questions at the end of a response, she said.

The chatbots’ design can lead to users attributing human-like skills, emotions and feelings to the bots, she said. “One of the ethical concerns is that while users may feel listened to, understood and loved, this emotional attachment can actually exacerbate their isolation,” she said. People may get used to talking to a bot that’s always accommodating and always available, and may turn away from humans who can’t provide that.

“Overly realistic bots’ personalities can blur the line between human and machine, leading to emotional dependency and the potential for manipulation,” Pistilli added.

Top AI companies are exploring how to make their bots funnier. Anthropic said it wants its AI model Claude to be like a pleasant co-worker: “They’re honest, but they can inject a little bit of humor into a conversation with you,” Anthropic co-founder and President Daniela Amodei told my colleague Shirin Ghaffary recently.

AI companies, even when faced with these ethical tradeoffs, may not be able to avoid the allure of hooking users by making their chatbots more human-like. According to a recent report in the Information, Google is looking into making its own version of Character.ai-like entertainment bots. And OpenAI slightly delayed its release of a more fluid voice-powered version of GPT-4, but still says it’ll be available “in the coming weeks.”—Ellen Huet

28/06/2024

From Bloomberg

Warning signs
Last week, US Surgeon General Vivek Murthy said all social media platforms should be required to add warning labels, the way cigarettes were decades ago. But would that actually protect kids from the dangers of social media?

Child safety advocates I spoke to don’t think so. “The warning label is fine, but we don’t just stick a warning label on alcohol,” said Tim Estes, the founder and chief executive officer of children’s app Angel AI, and a consultant on the proposed Kids Online Safety Act in the US. “We card them.”

Other advocates cautioned that focusing on warning labels could distract from solving social media’s deeper problems: excessive data collection, addictive video feeds or poor enforcement of age restrictions, for example. Instead of labels, we need legislation, they told me.

Both can be true. There’s at least some evidence that warning labels can influence behavior. Nathanael Fast, a behavioral scientist at the University of Southern California and the co-director of the Psychology of Technology Institute, said warning labels are “probably not effective on their own,” but research has shown they can grab people’s attention and change attitudes. “It’s maybe a little less effective at changing your actual behaviors, but in some cases it does,” he said.

New data from Snap Inc., for example, shows that after the company added a pop-up warning to Snapchat giving teens the option to block strangers who messaged them, the prompt led to 12 million blocks. The photo-sharing app didn’t disclose the number of times the warning popped up in total or the number of people who rejected it (Snapchat has about 800 million users worldwide). But the feature at least protected some teens from potentially harmful interactions.

Snap announced this week that it plans to expand the pop-up warnings to appear on messages from people who have been blocked or reported by others, or who are from a region outside of a teen’s typical network. In some cases, the Snapchat app will automatically block messages and friend requests.

Warning labels, of course, aren’t always enough. Earlier this year, the chief executive officers of social media companies Snap, Meta Platforms Inc., TikTok, Discord and X testified before Congress about protecting kids online. During the hearing, Senator Ted Cruz of Texas pointed out an obvious flaw with the warning labels on Meta’s Instagram. When a user searched for child sex abuse material on the site, a warning label popped up with a link to get resources. But at the bottom, it still allowed the person to see the material anyway.

“Mr. Zuckerberg, what the hell were you thinking?” Cruz asked Meta CEO Mark Zuckerberg. “How many times did that user click on ‘see results’ anyway?” Zuckerberg didn’t have an answer.

Warning signs and pop-up messages clearly don’t remove the need for broader change. But adding one more layer between a child and harms like sexual abuse, financial scams and body shaming seems worth the effort. At the very least, it reminds children and their parents just how dangerous social media can be. We’ve been warned.—Aisha Counts

25/06/2024

It’s midweek for a short week. Have a smile. 😃

23/06/2024

Sunday trivia
Where can you find the underwater tree?

22/06/2024

Saturday trivia: who are these wedding singers?

14/06/2024

For those in Wellington or visiting: join us for a special week at Everybody Eats Wellington, LTD, Level 1, 60 Dixon St!
We're celebrating our former refugee community by inviting three groups to share their incredible cuisine with our rescued food. It's going to be a week of amazing kai, incredible kōrero, and unique community building.
🗓️ Dates: Monday, Tuesday, and Wednesday, 17th to 19th of June
🕕 Time: 6-8pm
💰 Pay as you can
The week will conclude with a special evening on Thursday, 20th June, hosted by our friends at Voices of Aroha. Join us for stories and poems from former refugees in our wonderful city, Pōneke.
Don't miss out on this unique celebration!

Address

Devonport And Takapuna
Takapuna

Website

http://www.theflea.co.nz/
