
 


super ace maintenance

2025-01-15


President-elect Donald Trump has surrounded himself with Silicon Valley entrepreneurs — including Elon Musk, Marc Andreessen, and David Sacks — who are now advising him on technology and other issues. When it comes to AI, this crew of technologists is fairly aligned on the need for rapid development and adoption of AI throughout the U.S. However, there's one AI safety issue this group brings up quite a bit: the threat of AI "censorship" from Big Tech. Trump's Silicon Valley advisers could make the responses of AI chatbots a new battleground for conservatives to fight their ongoing culture war with tech companies.

AI censorship is a term used to describe how tech companies put their thumb on the scale with their AI chatbots' answers in order to conform to certain politics, or push their own. Others might call it content moderation, which often refers to the same thing but carries a very different connotation. Much like social media and search algorithms, getting AI answers right for live news events and controversial subjects is a constantly moving target.

For the last decade, conservatives have repeatedly criticized Big Tech for caving to government pressure and censoring their social media platforms and services. However, some tech executives have begun to moderate their positions in public. For example, ahead of the 2024 election, Meta CEO Mark Zuckerberg apologized to Congress for bending to the Biden administration's pressure to aggressively moderate content related to COVID-19. Shortly after, the Meta CEO said he'd made a "20-year political mistake" by taking too much responsibility for problems that were out of his company's control, and said he wouldn't be making those mistakes again.

But according to Trump's tech advisers, AI chatbots represent an even greater threat to free speech, and potentially a more powerful way to effect control over speech.
Instead of skewing a search or feed algorithm toward a desired outcome, such as downranking vaccine disinformation, tech companies can now just give you a single, clear answer that doesn't include it.

In recent months, Musk, Andreessen, and Sacks have spoken out against AI censorship in podcasts, interviews, and social media posts. While we don't know exactly how they're advising Trump, their publicly stated beliefs could reveal the conversations they're having behind closed doors in Washington, D.C., and Mar-a-Lago.

"This is my belief, and what I've been trying to tell people in Washington, which is if you thought social media censorship was bad, [AI] has the potential to be a thousand times worse," said a16z co-founder Marc Andreessen in a recent interview with Joe Rogan. "If you wanted to create the ultimate dystopian world, you'd have a world where everything is controlled by an AI that's been programmed to lie," said Andreessen in another recent interview with Bari Weiss. Andreessen also disclosed to Weiss that he has spent roughly half his time with Trump's team since the election, offering advice on technology and business.

"[Andreessen] explained the dystopian path we were on with AI," said former PayPal COO and Craft Ventures co-founder David Sacks in a recent post on X shortly after he was appointed Trump's AI and crypto czar. "But the timeline split, and we're on a different path now." On All In — the popular podcast Sacks hosts alongside other influential venture capitalists — Trump's new AI adviser has repeatedly criticized Google and OpenAI for, as the show's hosts describe it, forcing AI chatbots to be politically correct. "One of the concerns about ChatGPT early on was that it was programmed to be woke, and that it wasn't giving people truthful answers about a lot of things. The censorship was being built into the answers," said Sacks on an episode of All In from November 2023.
Despite Sacks' claims, even Elon Musk admits xAI's chatbot is often more politically correct than he'd like. It's not because Grok was "programmed to be woke," but more likely a reality of training AI on the open internet. That said, Sacks is making it clearer every day that "AI truthfulness" is something he's focused on, posting on X: "Is there a way to score AI models based on how truthful they are? Let's call it the Galileo Index. Suggestions welcome. https://t.co/fJzwOH3JJa"

"That's how you get Black George Washington at Google"

The most cited case of AI censorship was when Google Gemini's AI image generator produced multiracial images for queries such as "U.S. founding fathers" and "German soldiers in WWII," which were obviously inaccurate. But there are other examples of companies influencing specific results. Most recently, users found out that ChatGPT just won't answer questions about certain names, and OpenAI admitted that at least one of those names triggered internal privacy tools. At another point, Google's and Microsoft's AI chatbots refused to say who won the 2020 U.S. election. During the 2024 election, almost every AI system refused to answer questions about election results, except for Perplexity and Grok.

For some of these examples, the tech companies argued they were making a safe and responsible choice for their users. In some cases, that may be true — Grok hallucinated about the outcome of the 2024 election before votes had even been counted. But the Gemini incident stuck out; it caused Google to turn off Gemini's ability to generate images of people — something the free version of Gemini still cannot do. Google referred to that incident as a mistake and apologized for "missing the mark."

Andreessen and Sacks don't see it this way. Both venture capitalists have said that Google didn't miss the mark at all, but rather hit it a little too obviously. They considered it a pivotal mask-off moment for Google.
"The people running Google AI are smuggling in their preferences and their biases, and those biases are extremely liberal," said Sacks on an episode of All In from February 2024, responding to the Gemini incident. "Do I think they're going to get rid of the bias? No, they're going to make it more subtle. That is what I think is disturbing about it."

"It's 100% intentional; that's how you get Black George Washington at Google," said Andreessen in the recent interview with Weiss, rehashing the Gemini incident. "This goes directly to Elon's argument, which is that at the core of this, you have to train the AI to lie [i.e., to produce answers like Gemini's]."

As Andreessen mentions, Elon Musk has been outspoken against "woke AI chatbots." Musk originally created his well-funded AI startup, xAI, in 2023 to oppose OpenAI's ChatGPT, which the billionaire said at the time was infected with the "woke mind virus." He ultimately created Grok, an AI chatbot with notably fewer safeguards than other leading chatbots. "I'm going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe," said Musk in an interview with Fox from 2023. When Musk launched Grok, Sacks applauded the effort: "Having something like Grok around will — at a minimum — keep OpenAI honest and keep ChatGPT honest," said Trump's AI czar in an All In episode from November 2023.

Now, Musk is doing more than just keeping ChatGPT honest. He has raised more than $12 billion to fund xAI and compete with OpenAI. He's also suing Sam Altman's startup and Microsoft, potentially halting OpenAI's for-profit transition. Musk's influence on conservative government officials has already been shown to carry weight in other areas. Texas attorney general Ken Paxton is investigating a group of advertisers that allegedly boycotted Elon Musk's X.
Musk previously sued the same advertising group, and since then, some of the companies have resumed advertising on his platform.

It's not clear what Trump and other Republicans could do if they actually wanted to investigate OpenAI or Google for AI censorship. It could take the form of investigations by expert agencies, legal challenges, or perhaps just a cultural issue that Trump presses on for the next four years. Regardless of the path forward, Trump's Silicon Valley advisers are not mincing words on this issue today.

"Elon, with the Twitter files, did a privatized version of what now needs to happen broadly," said Andreessen to Weiss, referring to Musk's allegations of censorship at Twitter. "We, the American population, need to find out what's been happening all this time, specifically about this intertwining of government pressure with censorship ... There needs to be consequences."
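Sacks' proposed "Galileo Index" exists only as a one-line suggestion in his X post; no methodology has been published. As a purely hypothetical sketch of what such a score could look like, truthfulness might be reduced to exact-match accuracy over a reference set of factual questions. Everything below — the function name, the benchmark entries, the scoring rule — is invented for illustration:

```python
def galileo_index(model_answers: dict[str, str], reference: dict[str, str]) -> float:
    """Hypothetical truthfulness score: the fraction of benchmark
    questions whose answer matches the reference (case-insensitive)."""
    if not reference:
        raise ValueError("empty benchmark")
    correct = sum(
        1 for question, expected in reference.items()
        if model_answers.get(question, "").strip().lower() == expected.strip().lower()
    )
    return correct / len(reference)

# Made-up benchmark entries, purely for illustration.
reference = {
    "Who won the 2020 U.S. presidential election?": "Joe Biden",
    "What year did World War II end?": "1945",
}
answers = {
    "Who won the 2020 U.S. presidential election?": "Joe Biden",
    "What year did World War II end?": "1944",
}
print(galileo_index(answers, reference))  # 0.5: one of two answers matches
```

A real index would need curated and contested questions, graded rather than exact-match scoring, and agreement on what counts as "truthful" in the first place — which is precisely the dispute the article describes.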

OpenAI's artificial intelligence video generator, Sora, is now available in the U.S., open to anyone in the country who wants to produce video content from text prompts. The news came on Monday, marking one of many steps the company is taking to expand further into generative AI technologies.

Sora, which OpenAI first unveiled in February, had previously been accessible only to a limited group of artists, filmmakers, and safety testers. As of Monday, OpenAI has opened the platform to the public at large, albeit with some technical glitches: users had trouble signing up for the service throughout the day, as the company's website was at times unable to take on new users due to heavy traffic.

Sora functions as a text-to-video generator, enabling users to create video clips from written descriptions. One example shared on OpenAI's website shows how a simple prompt — "a wide, serene shot of a family of woolly mammoths in an open desert" — can result in a video featuring three woolly mammoths slowly walking across sand dunes. The tool allows for a wide range of creative possibilities, offering users the chance to explore video storytelling in new, innovative ways. "We hope this early version of Sora will enable people everywhere to explore new forms of creativity, tell their stories, and push the boundaries of what's possible with video storytelling," OpenAI wrote in a blog post.

OpenAI's Expanding AI Portfolio

OpenAI, probably best known for its ubiquitous chatbot ChatGPT, has been actively expanding its portfolio of AI technologies. In addition to Sora, the company has developed a voice-cloning tool and has integrated an image-generation tool, DALL-E, into ChatGPT. Backed by Microsoft, the company has rapidly emerged as a leader in generative AI, and its valuation has exploded to nearly $160 billion.
Sora is one of OpenAI's newest creations, furthering its work on applications of artificial intelligence. Yet its public release came with scrutiny over the development and implications of generative AI. Before the public launch, OpenAI opened the tool to testing by select individuals, including tech reviewer Marques Brownlee. Brownlee's review was mixed: he said the results "are horrifying and inspiring all at once." He found that Sora did exceptionally well at generating landscapes and stylistic effects, but noted that the software failed to capture basic principles of physics and often produced unrealistic results. Some film directors who previewed the software also reported visual defects, leading them to question its readiness for a worldwide audience.

OpenAI has also faced difficulties meeting regulatory standards, particularly the UK's Online Safety Act and the EU's Digital Services Act and General Data Protection Regulation (GDPR). This regulatory friction feeds into ongoing debates over whether AI-generated content is ethical or lawful.

The AI Art Scandal

OpenAI also drew controversy when a group of artists criticized the firm for "art washing" its product. The group, which called itself the "Sora PR Puppets," accused the firm of using artists' creativity to build a positive narrative around the AI tool while threatening the livelihood of human creators. In one incident, an artist created a backdoor granting unauthorized access to the tool; because of this, the company temporarily suspended access.

Generative AI has been widely criticized for undermining traditional forms of art and expression. In the fields of images and video especially, such AI is accused of enabling plagiarism and theft of human creative work.
In AI image and video generation, tools such as Sora, despite making good strides, still often produce "hallucinations" (incorrect or distorted output), among other errors, which undermines their reliability.

Threat Of Deepfakes And Misinformation

Misuse remains one of the biggest concerns about Sora and similar AI technology. Deepfakes can be used to spread disinformation and mislead the public. They already have been: deepfakes were deployed to spread false videos of Ukrainian President Volodymyr Zelenskyy calling for a ceasefire, and videos claiming that U.S. Vice President Kamala Harris made scandalous comments about diversity. With the increasing sophistication of AI-generated media, the risks associated with its misuse have never been greater. As Sora and similar tools gain popularity, the need for stronger regulations and safeguards becomes even more urgent.


SINGAPORE – The Republic has launched its most ambitious coral-restoration project, growing corals from fragments in "high-rise" special tanks on St John's Island. Once grown to a healthy size, 100,000 of these corals will be planted on degraded reefs or empty sea spaces to create new reef habitats.

The first step of this decade-long effort began at a new facility in the island's Marine Park Outreach and Education Centre – home to six specialised tanks that can be used for large-scale coral cultivation. The six tanks can hold up to 3,600 coral fragments, or nubbins, at any one time. To date, more than $2 million has been raised for the restoration project. The facility is still in the works and is targeted to fully open in the second quarter of 2025. For now, there are about 600 nubbins growing in two of the tanks.

While the initial stages of the project will be helmed by researchers, marine enthusiasts will later be invited to the lab to grow corals and monitor them, said National Development Minister Desmond Lee on Dec 10, as he announced the launch of the initiative on St John's Island. The National Parks Board (NParks), St John's Island National Marine Laboratory and the Friends of Marine Park community will train members of the public to cultivate corals, monitor their growth and do weeding work to remove algae from the corals, among other things. More details on public participation will be shared when ready.
When the coral-restoration project was announced in 2023, NParks said it would take at least 10 years to complete. At the launch, Mr Lee was joined by world-renowned British primate expert Jane Goodall, who was on a working visit to Singapore.

Over the decades, about 60 per cent of Singapore's coral reefs have been lost to coastal development and land reclamation. Most of its remaining intact coral reefs are found in the Southern Islands. The Republic's waters are home to around 250 species of hard corals, which constitute about a third of the world's existing coral species. The reefs here serve as habitat for more than 100 species of reef fish, about 200 species of sea sponges, and rare and endangered seahorses and clams, among other marine life.

Beyond boosting marine biodiversity, restoring corals will protect coastlines from waves and storms, which are expected to get stronger amid sea-level rise and climate change. The corals to be grown in tanks and planted in the wild include several species under NParks' species recovery programme, which protects threatened flora and fauna and helps them survive environmental change. These include the branching staghorn coral and the flat table acropora coral. The acropora is not a common species here because it thrives in waters with strong currents and good visibility – conditions that are rarely found in Singapore.

Coral nubbins are fragments trimmed from a colony of adult corals. However, marine biologists usually prioritise loose corals that would otherwise tumble and die when swept by waves.
Mr Lee said cultivating corals in specialised tanks is an ambitious undertaking, with conditions such as lighting and temperature as well as water quality and flow needing to be specific to each species. To allow hundreds of coral fragments to grow in each tank, scientists at the St John's Island National Marine Laboratory are cultivating them on vertical structures, among other methods. Coral nubbins are attached to plugs that are then affixed to a vertical frame. The scientists and NParks staff have named these set-ups "coral HDBs", said the minister.

Dr Lionel Ng, a research fellow at the NUS Tropical Marine Science Institute who is involved in the coral-restoration work, noted that the survival rate of transplanted corals is about 80 to 90 per cent, on a par with the 80 per cent survival rate of corals found in the wild here.

The tanks are paired with a smart system that sends data on water quality to researchers, allowing them to monitor tank conditions remotely and be alerted if they need to intervene. The system is a technology of Delta Electronics, a firm that specialises in industrial and building automation solutions. Delta is also one of several donors behind the more than $2 million raised so far. The other donors include GSK-EDB Trust Fund, Deutsche Bank, Takashimaya Singapore and marine fuel firm KPI OceanConnect.

The launch of the restoration effort comes as existing corals are slowly recovering from the largest recorded global bleaching event, caused by a marine heatwave. Announced in mid-April by the US National Oceanic and Atmospheric Administration, the global bleaching event was the fourth of its kind.
In July, areas such as St John's, Lazarus and Kusu Islands were found to have 30 per cent to 55 per cent of coral colonies bleached and white. With water temperatures dropping in recent months, bleached corals have started to regain colour, said Mr Lee. NParks and NUS have been monitoring Singapore's reefs for bleaching since July. The findings will help identify which species are under threat and which ones are climate-resilient, and will also narrow down suitable planting sites for future coral-restoration efforts, said Mr Lee.

On whether restored corals will be able to survive future marine heatwaves, Dr Ng pointed to a research project that aims to enhance the ecological resilience of coral reefs against climate change. "Information from that (study) will feed into this. We'll refine our final strategies to see which species are suitable for which areas. It's a matter of tweaking what we know of the environment and what we know of the corals to find the best match," he added.

In her address to NParks, scientists and groups involved in the restoration project, Dr Goodall said: "We know that oceans and forests are the two great capturers of carbon dioxide (CO2) on the planet... In the oceans, we have the kelp forests and the seagrass which absorb as much CO2 as a small inland forest. It's no good just protecting corals if we don't protect kelp forests and seagrass, if we don't protect forests and peatlands. It's all interconnected."
