RIO DE JANEIRO, Brazil — Brazil's federal police say the former right-wing president, Jair Bolsonaro, attempted to launch a coup in 2022 to stay in office following his re-election defeat. The police indicted 36 other people as part of what they say was a criminal conspiracy working to keep Bolsonaro in power after he lost the 2022 election to President Luiz Inácio Lula da Silva. Among the dozens allegedly part of the conspiracy are Bolsonaro's former defense minister, who was also his vice-presidential running mate, and a number of former close aides. The Federal Police report called the coup an attempt to "violently dismantle the constitutional state." The nearly 900-page report now goes to Brazil's Supreme Court, to be referred to the attorney general, who will decide whether to prosecute the former president. Shortly after Bolsonaro's left-wing rival took office in 2023, on January 8th, thousands of Bolsonaro supporters stormed the presidential palace, the Supreme Court and Congress in the capital, Brasilia. Former Bolsonaro administration officials also accused of involvement in the alleged plot include Defense Minister Walter Braga Netto, ex-National Security Adviser Augusto Heleno, the head of Bolsonaro's party, Valdemar Costa Neto, and former Justice Minister Anderson Torres. On Tuesday, officials arrested four members of the military, including a top aide to Bolsonaro, who they said conspired to assassinate then President-elect Luiz Inácio Lula da Silva, his vice-presidential pick and a Supreme Court justice. The plan was to spark a federal emergency that would allow Bolsonaro to declare a "state of siege" and stay in power as a caretaker government. If convicted of attempting a coup and criminal association, the former president could face years in prison. Bolsonaro has denied all charges and says he is being politically persecuted. This is a developing story and will be updated. Copyright 2024 NPR
C3.ai (NYSE: AI) stock is soaring in Thursday's trading. The company's share price was up 9.7% as of 2:30 p.m. ET. Meanwhile, the S&P 500 index was up roughly 0.5%. C3.ai stock is seeing big gains on the heels of Nvidia's third-quarter report yesterday. Nvidia is the leading provider of the graphics processing units (GPUs) that are powering the artificial intelligence (AI) revolution, and its performance is often viewed as a bellwether for the broader AI industry. Nvidia's Q3 report is lifting C3.ai stock. After the market closed yesterday, Nvidia published results for the third quarter of its 2025 fiscal year, which ended Oct. 27. The AI leader posted sales and earnings for the quarter that beat Wall Street's expectations, and it also issued forward guidance that came in better than anticipated. Nvidia posted non-GAAP (adjusted) earnings per share of $0.81 on sales of $35.08 billion, topping the average analyst estimate's call for adjusted earnings per share of $0.75 on sales of $33.16 billion. The company's revenue was up 94% year over year, and adjusted earnings per share were up 103%. Nvidia expects revenue of roughly $37.5 billion for the current quarter. If the business were to hit that target, it would mean delivering annual sales growth of roughly 70%. While the company's sales growth is decelerating, the overall demand outlook for the AI space is very strong. That bodes well for C3.ai and other players, and investors are responding by bidding up the company's stock today. What's next for C3.ai? In its last quarterly report, C3.ai's revenue increased 21% year over year to $87.2 million. Meanwhile, the business posted an adjusted loss per share of $0.05. Sales growth actually looks poised to accelerate in the near term. For the current quarter, the company is guiding for sales to come in between $88.6 million and $93.6 million -- good for growth of roughly 24.5% at the midpoint. Meanwhile, full-year sales are projected to come in between $370 million and $395 million. If the business were to hit the midpoint of that guidance range, it would mean delivering sales growth of approximately 23%. Along with some encouraging forward sales guidance, C3.ai has also been scoring some promising partnerships recently. The company recently announced that it has forged a new partnership with Microsoft to accelerate the adoption of enterprise AI applications, and it published a press release yesterday detailing a partnership with Capgemini targeting AI solutions for industries including life sciences, energy, utilities, government, banking, and manufacturing.
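The growth percentages quoted above follow directly from the guidance ranges. As a quick check, here is a minimal Python sketch (not from the article) that computes the range midpoints and backs out the year-ago revenue that each stated growth rate implies; the "implied" prior-year figures are derived from the article's own numbers rather than reported results.

```python
# Minimal sketch: reproduce the guidance math quoted in the article.
# Inputs are the article's guidance figures (millions of dollars); the
# "implied" prior-year values are backed out of its stated growth rates.

def midpoint(low: float, high: float) -> float:
    """Midpoint of a guidance range."""
    return (low + high) / 2.0

def implied_prior(current: float, growth_pct: float) -> float:
    """Year-ago figure implied by a current value and a YoY growth rate."""
    return current / (1.0 + growth_pct / 100.0)

# Current-quarter guidance: $88.6M-$93.6M, "roughly 24.5%" growth.
q_mid = midpoint(88.6, 93.6)  # 91.1
print(f"quarter midpoint: ${q_mid:.1f}M")
print(f"implied year-ago quarter: ${implied_prior(q_mid, 24.5):.1f}M")  # ~73.2

# Full-year guidance: $370M-$395M, "approximately 23%" growth.
fy_mid = midpoint(370.0, 395.0)  # 382.5
print(f"full-year midpoint: ${fy_mid:.1f}M")
print(f"implied prior-year revenue: ${implied_prior(fy_mid, 23.0):.1f}M")  # ~311
```

Running the sketch confirms the article's internal consistency: a $91.1 million quarterly midpoint against an implied year-ago quarter of about $73.2 million gives the quoted 24.5%, and the $382.5 million full-year midpoint against roughly $311 million gives about 23%.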
Philosopher Shannon Vallor and I are in the British Library in London, home to 170 million items—books, recordings, newspapers, manuscripts, maps. In other words, we’re talking in the kind of place where today’s artificial intelligence chatbots like ChatGPT come to feed. Sitting on the library’s café balcony, we are literally in the shadow of the Crick Institute, the biomedical research hub where the innermost mechanisms of the human body are studied. If we were to throw a stone from here across St. Pancras railway station, we might hit the London headquarters of Google, the company for which Vallor worked as an AI ethicist before moving to Scotland to head the Center for Technomoral Futures at the University of Edinburgh. Here, wedged between the mysteries of the human, the embedded cognitive riches of human language, and the brash swagger of commercial AI, Vallor is helping me make sense of it all. Will AI solve all our problems, or will it make us obsolete, perhaps to the point of extinction? Both possibilities have engendered hyperventilating headlines. Vallor has little time for either. She acknowledges the tremendous potential of AI to be both beneficial and destructive, but she thinks the real danger lies elsewhere. As she explains in her 2024 book The AI Mirror, both the starry-eyed notion that AI thinks like us and the paranoid fantasy that it will manifest as a malevolent dictator assert a fictitious kinship with humans at the cost of creating a naïve and toxic view of how our own minds work. It’s a view that could encourage us to relinquish our agency and forgo our wisdom in deference to the machines. Reading The AI Mirror, I was struck by Vallor’s determination to probe more deeply than the usual litany of concerns about AI: privacy, misinformation, and so forth. Her book is really a discourse on the relation of human and machine, raising the alarm on how the tech industry propagates a debased version of what we are, one that reimagines the human in the guise of a soft, wet computer. If that sounds dour, Vallor most certainly isn’t. She wears lightly the deep insight gained from seeing the industry from the inside, coupled with a grounding in the philosophy of science and technology. She is no crusader against the commerce of AI, speaking warmly of her time at Google while laughing at some of the absurdities of Silicon Valley. But the moral and intellectual clarity and integrity she brings to these issues could hardly offer a greater contrast to the superficial, callow swagger typical of the proverbial tech bros. “We’re at a moment in history when we need to rebuild our confidence in the capabilities of humans to reason wisely, to make collective decisions,” Vallor tells me. “We’re not going to deal with the climate emergency or the fracturing of the foundations of democracy unless we can reassert a confidence in human thinking and judgment. And everything in the AI world is working against that.” To understand AI algorithms, Vallor argues, we should not regard them as minds. “We’ve been trained over a century by science fiction and cultural visions of AI to expect that when it arrives, it’s going to be a machine mind,” she tells me. “But what we have is something quite different in nature, structure, and function.” Rather, we should imagine AI as a mirror, which doesn’t duplicate the thing it reflects. 
“When you go into the bathroom to brush your teeth, you know there isn’t a second face looking back at you,” Vallor says. “That’s just a reflection of a face, and it has very different properties. It doesn’t have warmth; it doesn’t have depth.” Similarly, a reflection of a mind is not a mind. AI chatbots and image generators based on large language models are reflections of human performance. “With ChatGPT, the output you see is a reflection of human intelligence, our creative preferences, our coding expertise, our voices—whatever we put in.” Even experts, Vallor says, get fooled inside this hall of mirrors. Geoffrey Hinton, the computer scientist who shared this year’s Nobel Prize in physics for his pioneering work in developing the deep-learning techniques that made LLMs possible, said at an AI conference in 2024 that “we understand language in much the same way as these large language models.” Hinton is convinced these forms of AI don’t just blindly regurgitate text in patterns that seem meaningful to us; they develop some sense of the meaning of words and concepts themselves. An LLM is trained by allowing it to adjust the connections in its neural network until it reliably gives good answers, a process that Hinton has likened to “parenting for a supernaturally precocious child.” But because AI can “know” vastly more than we can, and “thinks” much faster, Hinton concludes that it might ultimately supplant us: “It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence,” he said at a 2023 MIT Technology Review conference. “Hinton is so far out over his skis when he starts talking about knowledge and experience,” Vallor says. “We know that the human brain and the neural networks behind LLMs are only superficially analogous in their structure and function. In terms of what’s happening at the physical level, there’s a gulf of difference that we have every reason to think makes a difference.” There’s no real kinship at all. I agree that apocalyptic claims have been given far too much airtime, I say to Vallor. But some researchers say LLMs are getting more “cognitive”: OpenAI’s latest chatbot, model o1, is said to work via a series of chain-of-reason steps (even though the company won’t disclose them, so we can’t know if they resemble human reasoning). And AI surely does have features that can be considered aspects of mind, such as memory and learning. Computer scientist Melanie Mitchell and complexity theorist David Krakauer have argued that, while we shouldn’t regard these systems as minds like ours, they might be considered minds of a quite different, unfamiliar variety. “I’m quite skeptical about that approach. It might be appropriate in the future, and I’m not opposed in principle to the idea that we might build machine minds. I just don’t think that’s what we’re doing right now.” Vallor’s resistance to the idea of machine minds stems from her background in philosophy, where mindedness tends to be rooted in experience: precisely what today’s AI does not have. As a result, she says, it isn’t appropriate to speak of these machines as thinking. Her view collides with the 1950 paper by British mathematician and computer pioneer Alan Turing, “Computing Machinery and Intelligence,” often regarded as the conceptual foundation of AI. Turing asked the question “Can machines think?”—only to replace it with what he considered a better question: whether we might develop machines that could give responses to questions we’d be unable to distinguish from those of humans. This was Turing’s “imitation game,” now commonly known as the Turing test. 
But imitation is all it is, Vallor says. “For me, thinking is a specific and rather unique set of experiences we have. Thinking without experience is like water without the hydrogen—you’ve taken something out that loses its identity.” Reasoning requires concepts, Vallor says, and LLMs don’t have those. “Whatever we’re calling concepts in an LLM are actually something different. It’s a statistical mapping of associations in a high-dimensional mathematical vector space. Through this representation, the model can get a line of sight to the solution that is more efficient than a random search. But that’s not how we think.” They are, however, very good at imitating reasoning. “We can ask the model, ‘How did you come to that conclusion?’ and it just bullshits a whole chain of thought that, if you press on it, will collapse into nonsense very quickly. That tells you that it wasn’t a train of thought that the machine followed and is committed to. It’s just another probabilistic distribution of reason-like shapes that are appropriately matched with the output that it generated. It’s entirely post hoc.” The pitfall of insisting on a fictitious kinship between the human mind and the machine has been discernible since the earliest days of AI in the 1950s. And here’s what worries me most about it, I tell Vallor: not so much that the capabilities of AI systems are being overestimated in the comparison, but that the way the human brain works is being so diminished by it. “That’s my biggest concern,” she agrees. Every time she gives a talk pointing out that AI algorithms are not really minds, Vallor says, “I’ll have someone in the audience come up to me and say, ‘Well, you’re right, but only because at the end of the day our minds aren’t doing these things either—we’re not really rational, we’re not really responsible for what we believe, we’re just predictive machines spitting out the words that people expect, we’re just matching patterns, we’re just doing what an LLM is doing.’” Hinton has suggested an LLM can have feelings. “Maybe not exactly as we do but in a slightly different sense,” Vallor says. “And then you realize he’s only done that by stripping the concept of emotion from anything that is humanly experienced and turning it into a behaviorist reaction. It’s taking the most reductive 20th-century theories of the human mind as baseline truth. From there it becomes very easy to assert kinship between machines and humans because you’ve already turned the human into a mindless machine.” It’s with the much-vaunted notion of artificial general intelligence (AGI) that these problems start to become acute. AGI is often defined as a machine intelligence that can perform any intelligent function that humans can, but better. Some believe we are already on that threshold. Except that, to make such claims, we must redefine human intelligence as a subset of what we do. “Yes, and that’s a very deliberate strategy to draw attention away from the fact that we haven’t made AGI and we’re nowhere near it,” Vallor says. Originally, AGI meant something that misses nothing of what a human mind could do—something about which we’d have no doubt that it is thinking and understanding the world. 
But in The AI Mirror, Vallor explains that experts such as Hinton and Sam Altman, CEO of OpenAI, the company that created ChatGPT, now define AGI as a system that is equal to or better than humans at calculation, prediction, modeling, production, and problem-solving. “In effect,” Vallor says, Altman “moved the goalposts and said that what we mean by AGI is a machine that can in effect do all of the economically valuable tasks that humans do.” It’s a common view in the community. Mustafa Suleyman, CEO of Microsoft AI, has written that the ultimate objective of AI is to “distill the essence of what makes us humans so productive and capable into software, into an algorithm,” which he considers equivalent to being able to “replicate the very thing that makes us unique as a species, our intelligence.” When she saw Altman’s reframing of AGI, Vallor says, “I had to shut the laptop and stare into space for half an hour. Now all we have for the target of AGI is something that your boss can replace you with. It can be as mindless as a toaster, as long as it can do your work. And that’s what LLMs are—they are mindless toasters that do a lot of cognitive labor without thinking.” I probe this point with Vallor. After all, having AIs that can beat us at chess is one thing—but now we have algorithms that write convincing prose, have engaging chats, and make music that fools some into thinking it was made by humans. Sure, these systems can be rather limited and bland—but aren’t they encroaching ever more on tasks we might view as uniquely human? “That’s where the mirror metaphor becomes helpful,” she says. “A mirror image can dance. A good enough mirror can show you the aspects of yourself that are deeply human, but not the inner experience of them—just the performance.” With AI art, she adds, “The important thing is to realize there’s nothing on the other side participating in this communication.” What confuses us is that we can feel emotions in response to an AI-generated “work of art.” But this isn’t surprising, because the machine is reflecting back permutations of the patterns that humans have made: Chopin-like music, Shakespeare-like prose. And the emotional response isn’t somehow encoded in the stimulus but is constructed in our own minds: Engagement with art is far less passive than we tend to imagine. But it’s not just about art. “We are meaning-makers and meaning-inventors, and that’s partly what gives us our personal, creative, political freedoms,” Vallor says. “We’re not locked into the patterns we’ve ingested but can rearrange them in new shapes. We do that when we assert new moral claims in the world. But these machines just recirculate the same patterns and shapes with slight statistical variations. They do not have the capacity to make meaning. That’s fundamentally the gulf that prevents us from being justified in claiming real kinship with them.” I ask Vallor whether some of these misconceptions and misdirections about AI are rooted in the nature of the tech community itself—in its narrowness of training and culture, its lack of diversity. She sighs. “Having lived in the San Francisco Bay Area for most of my life and having worked in tech, I can tell you the influence of that culture is profound, and it’s not just a particular cultural outlook; it has the features of a religion. There are certain commitments in that way of thinking that are unshakeable by any kind of counterevidence or argument.” In fact, providing counterevidence just gets you excluded from the conversation, Vallor says. 
“It’s a very narrow conception of what intelligence is, driven by a very narrow profile of values where efficiency and a kind of winner-takes-all domination are the highest values for any intelligent creature to pursue.” But this efficiency, Vallor continues, “is never defined with any reference to any higher value, which always slays me. Because I could be the most efficient at burning down every house on the planet, and no one would say, ‘Yay Shannon, you are the most efficient pyromaniac we have ever seen! Good on you!’” In Silicon Valley, efficiency is an end in itself. “It’s about achieving a situation where the problem is solved and there’s no more friction, no more ambiguity, nothing left unsaid or undone, you’ve dominated the problem and it’s gone and all there is left is your perfect shining solution. It is this ideology of intelligence as a thing that wants to remove the business of thinking.” Vallor tells me she once tried to explain to an AGI leader that there’s no mathematical solution to the problem of justice. “I told him the nature of justice is we have conflicting values and interests that cannot be made commensurable on a single scale, and that the work of human deliberation and negotiation and appeal is essential. And he told me, ‘I think that just means you’re bad at math.’ What do you say to that? It becomes two worldviews that don’t intersect. You’re speaking to two very different conceptions of reality.” Vallor doesn’t underestimate the threats that ever more powerful AI presents to our societies, from our privacy to misinformation and political stability. But her real worry right now is what AI is doing to our notion of ourselves. “I think AI is posing a fairly imminent threat to the existential significance of human life,” Vallor says. “Through its automation of our thinking practices, and through the narrative that’s being created around it, AI is undermining our sense of ourselves as responsible and free intelligences in the world. You can find that in authoritarian rhetoric that wishes to justify depriving humans of the right to govern themselves. That story has had new life breathed into it by AI.” Worse, she says, this narrative is presented as an objective, neutral, politically detached story: It’s just science. “You get these people who really think that the time of human agency has ended, that the sun is setting on human decision-making—and that that’s a good thing and is simply scientific fact. That’s terrifying to me. We’re told that what’s next is that AGI is going to build something better. And I do think you have very cynical people who believe this is true and are taking a kind of religious comfort in the belief that they are shepherding into existence our machine successors.” Vallor doesn’t want AI to come to a halt. She says it really could help to solve some of the serious problems we face. “There are still huge applications of AI in medicine, in the energy sector, in agriculture. I want it to continue to advance in ways that are wisely selected and steered and governed.” That’s why a backlash against it, however understandable, could be a problem in the long run. “I see lots of people turning against AI,” Vallor says. “It’s becoming a powerful hatred in many creative circles. Those communities were much more balanced in their attitudes about three years ago, when LLMs and image models started coming out. 
There were a lot of people saying, ‘This is kind of cool.’ But the approach by the AI industry to the rights and agency of creators has been so exploitative that you now see creatives saying, ‘Fuck AI and everyone attached to it, don’t let it anywhere near our creative work.’ I worry about this reactive attitude to the most harmful forms of AI spreading to a general distrust of it as a path to solving any kind of problem.” While Vallor still wants to promote AI, “I find myself very often in the camp of the people who are turning angrily against it for reasons that are entirely legitimate,” she says. That divide, she admits, becomes part of an “artificial separation people often cling to between humanity and technology.” Such a distinction, she says, “is potentially quite damaging, because technology is fundamental to our identity. We’ve been technological creatures since before we were Homo sapiens. Tools have been instruments of our liberation, of creation, of better ways of caring for one another and other life on this planet, and I don’t want to let that go, to enforce this artificial divide of humanity versus the machines. Technology at its core can be as humane an activity as anything can be. We’ve just lost that connection.” Philip Ball is a freelance writer based in London, and the author of many books on science and its interactions with the broader culture. His latest book is How Life Works.

Ever wanted to bring your dog with you aboard a cruise ship? Do you have a business focused on dogs and their families? If you answered yes to either question, you’ll be excited to learn that what’s being called the first-ever dog-friendly cruise is being planned aboard Margaritaville at Sea’s Islander out of the Port of Tampa in November 2025. And business opportunities await. Cruise ships famously don’t allow dogs other than service animals. Organizers of this cruise anticipate selecting from a long line of hopefuls. A “waitlist for all dog parents who have dreamt of bringing their furry friends along for their vacations will open soon,” a news release says. Organizers are calling for 250 dogs, “their owners and their closest humans” to become “inaugural ambassadors” for the cruise, which they promise will offer “unique experiences and activities including gifts and samples from top vendors, dog shows and trainings, guest speakers, costume contests, parades, and more.” The event is being staged by two organizations — Cruise Tails and Expedia Cruises of West Orlando. The website cruisetails.com seeks sponsors and investors in hopes of turning the cruise into a recurring event. Sponsorship and partnership opportunities are available for companies seeking brand visibility “across a passionate pet-loving audience,” the site says. And participants must sign photo waivers, the website says, adding, “We anticipate the fun will be all over social media and even in the press. In fact, the 250 chosen will undoubtedly be asked by sponsors to try products and post about them.” Cruise Tails was formed by Steve Matzke, a Bradenton-based entrepreneur listed on LinkedIn as beginning his career this month as an “independent consultant.” Matzke spent four years prior to that as senior director of external relations for the American Accounting Association, and 12 years before that as director of faculty and university initiatives for the American Institute of CPAs, his LinkedIn profile shows. 
Expedia Cruises of West Orlando was founded in 2019 by Dawn von Graff, an avid traveler who has taken more than 75 cruises and visited more than 80 countries, and her husband. She owned a computer networking firm, worked as an international tour manager, and was a top salesperson for Marriott before forming Expedia Cruises of West Orlando as a full-service travel agency. Details including dates, prices and itineraries have not yet been released. According to the website, organizers hope to select the inaugural 250 dogs based partly on how the dogs perform in a “video talent singing contest” as well as on “a variety of criteria” to be announced “over the next few weeks.” The bigger the dog’s entourage, the better chance it will have of being chosen, the website says. “Preference will be given to dogs in a group which includes one dog cabin traveling with two or more associated cabins of friends or family without dogs,” it says. A spokeswoman for Margaritaville at Sea says the organizers are chartering the Islander, and the cruise will not be available for booking to the general public. Each dog will have a “private relief station” on its cabin balcony, and when dogs don’t make it to the relief station, each will have its own “pet butler” to ensure “their cabin and the boat remain in top condition,” a Cruise Tails spokeswoman said. Participants must agree to follow protocols on board, including keeping their dogs in permitted areas and making sure they are up to date on appropriate vaccinations. Dogs will not be allowed in dining areas, the ship’s casino, pool decks, lounges or music venues, according to the news release. Organizers will also be looking for workers and vendors. “We’re going to need dog walkers, pet butlers, and so much more,” the website says. And “if you have a proven skill like pet massage, grooming and pet walking or if you make custom dog costumes, have a unique dog product you would like to promote or are a well-known dog expert, we would love to chat with you.” Calls for pet handlers and vendors will be posted “in the next few months,” the site says. Whether the event turns into the profitable industry its organizers hope for will undoubtedly depend on how the first one unfolds. A spokeswoman did not immediately have answers to questions such as: What will happen to dogs that get aggressive with humans or other dogs? Will owners be required to purchase additional insurance to cover any possibilities? Will food be provided, and how will feedings be handled? Contributors on Reddit.com posted mixed reactions to the announcement on Monday. “Cruises are already floating petri dishes. This doesn’t seem like a very good idea,” said one. “Now all decks are poop decks,” said another. A couple of posters worried about dogs going overboard. One said, “sounds awesome if you like dogs,” while another chimed in, “Better than a gorilla-friendly cruise, I suppose.” Ron Hurtibise covers business and consumer issues for the South Florida Sun Sentinel. He can be reached by phone at 954-356-4071, on Twitter @ronhurtibise or by email at rhurtibise@sunsentinel.com.
{ "@context": "https://schema.org", "@type": "NewsArticle", "dateCreated": "2024-11-27T21:06:06+02:00", "datePublished": "2024-11-27T21:06:06+02:00", "dateModified": "2024-11-27T21:06:04+02:00", "url": "https://www.newtimes.co.rw/article/22160/opinions/a-call-for-ethical-practice-to-save-our-nascent-insurance-sector", "headline": "A call for ethical practice to save our nascent insurance sector", "description": "The recent allegations of collusion between some lawyers and insurance companies or brokers to defraud road accident victims are deeply...", "keywords": "", "inLanguage": "en", "mainEntityOfPage":{ "@type": "WebPage", "@id": "https://www.newtimes.co.rw/article/22160/opinions/a-call-for-ethical-practice-to-save-our-nascent-insurance-sector" }, "thumbnailUrl": "https://www.newtimes.co.rw/thenewtimes/uploads/images/2024/11/27/64956.jpg", "image": { "@type": "ImageObject", "url": "https://www.newtimes.co.rw/thenewtimes/uploads/images/2024/11/27/64956.jpg" }, "articleBody": "The recent allegations of collusion between some lawyers and insurance companies or brokers to defraud road accident victims are deeply concerning. According to testimonies from different victims, such underhand methods have led them receive so little that they were meant to receive, which affects them in many ways, including depriving them of the means to get the deserving medical attention. Such unethical behavior not only undermines the trust of the public in the insurance industry but also perpetuates injustice. Insurance is a crucial financial tool that provides security and peace of mind. However, when unscrupulous individuals exploit the system for personal gain, it erodes the very foundation of trust that underpins the industry. As of October this year, insurance penetration in Rwanda stood at 2.1 per cent, which is way lower than the global average of 7 per cent and everything must be done to ensure more Rwandans join, a key catalyst for economic development. Victims of accidents, who are often vulnerable and in need of support, should not be subjected to further hardship and exploitation. To address this issue, it is imperative that the Association of Insurers of Rwanda, the Rwanda Bar Association, and the National Bank of Rwanda which is the industry regulator, work together to ensure ethical practices are upheld across board. The first step is that the bar, which has oversight over legal practitioners has admitted to the existence of such disingenuous characters within their ranks. Let the insurers also step up. Stricter regulations, transparent procedures, and robust oversight mechanisms must be implemented to prevent such abuses. Furthermore, it is essential to raise awareness about consumer rights and provide victims with information on how to seek legal redress. By empowering individuals to protect their interests, we can discourage unethical behavior and create a more equitable insurance landscape. The insurance industry has a vital role to play in the economic and social development of our nation. 
By upholding the highest standards of integrity and professionalism, we can build a stronger, more resilient, and trustworthy sector.", "author": { "@type": "Person", "name": "The New Times" }, "publisher": { "@type": "Organization", "name": "The New Times", "url": "https://www.newtimes.co.rw/", "sameAs": ["https://www.facebook.com/TheNewTimesRwanda/","https://twitter.com/NewTimesRwanda","https://www.youtube.com/channel/UCuZbZj6DF9zWXpdZVceDZkg"], "logo": { "@type": "ImageObject", "url": "/theme_newtimes/images/logo.png", "width": 270, "height": 57 } }, "copyrightHolder": { "@type": "Organization", "name": "The New Times", "url": "https://www.newtimes.co.rw/" } }
SPARTANBURG — Cleveland Academy of Leadership Principal Marquice Clark has been named the 2025 South Carolina Elementary School Principal of the Year. Cleveland Academy, a Spartanburg County School District 7 school with more than 500 students, has shown marked academic improvement under Clark's leadership. The school was created in 1999. Clark learned of the honor Nov. 25 in a surprise announcement in the school board's meeting room. His family, friends, former students and colleagues greeted him as he entered the room. He was overcome with emotion, holding back tears while asking, "What is going on?" It wasn't long after he arrived that he discovered why everyone was there. Clark was among 35 applicants considered for the award. Quincie Moore, executive director of the South Carolina Association of School Administrators, presented Clark with the award. He was joined by his wife, Brenda, and two children, Charlotte and Marquice. An emotional Clark shared his thoughts with the crowd gathered in the room. "None of this was done by one person," Clark said. "Every one of your prayers, your commitment to the children of Cleveland, made this happen. Nothing great ever comes from one person's efforts. When I was asked during the interview process what leadership traits you have been able to transcend through the building, and what you believe folks in the building model after your leadership, I said bravery." Clark said his journey at Cleveland started in 2011, when he joined the school as a second grade teacher. He served as assistant principal for three years, then as principal for the past five. Clark received a Bachelor of Science degree in elementary education with a minor in history from Morris College and later attended Furman University, where he earned a Master of Arts degree in administration and supervision. Jeff Stevens, District 7 superintendent, was among those who congratulated Clark on his achievement. "As the superintendent, this means all the world to me," Stevens told The Post and Courier after a brief ceremony. "To have someone of his caliber to be leading our schools and to know what his heart is and to know what he means to those kids, it's a game changer for our district. I am happy to have someone like him leading the way at Cleveland Academy." Several members of Clark's family from Sumter County traveled to Spartanburg to attend the announcement. "Dr. Clark has truly revolutionized the Cleveland Academy of Leadership by reducing discipline referrals and dramatically improving academic proficiency. ... He has demonstrated that transformative leadership can yield significant results," Moore said.