
By Michelle Goth

There is always that one dish on the Thanksgiving table – overlooked while the mashed potatoes and gravy steal the spotlight. Surprisingly, this much-maligned side dish has been a part of American tradition for centuries and holds endless possibilities beyond its original purpose, perfectly suited to be reinvented in unexpected ways. Despite its deep roots in Thanksgiving history, this dish remains one of the most polarizing items on the table today. What is this least popular Thanksgiving dish? Cranberry sauce.

Cranberry sauce has been part of American cuisine for centuries. Native Americans used cranberries in cooking and medicine, and early European settlers followed suit, incorporating the berries into sauces and preserves. By the 19th century, cranberry sauce had become a Thanksgiving tradition, although recipes varied by region. The introduction of canned cranberry sauce in the 20th century made it even more popular, cementing its spot as an expected accompaniment to Thanksgiving turkey on tables across the country.

Those who do enjoy cranberry sauce probably have a strong opinion about what form is acceptable. Some people prefer canned cranberry sauce for its nostalgic jiggle, while others insist that fresh, homemade cranberry sauce is the only way to go. Homemade whole-berry sauce is the top choice for many, with its texture and flavor offering something truly special.

A 2021 survey by the grocery delivery service Instacart found that cranberry sauce is the least favorite Thanksgiving dish, with 29% of American adults saying they hate it and nearly 50% calling it disgusting. That makes cranberry sauce both the most polarizing and the most disliked dish on the Thanksgiving table. Even so, omitting cranberry sauce from the holiday meal is still considered a bit of a Thanksgiving faux pas.
Regardless of what kind of cranberry sauce graces the table, its lack of popularity guarantees leftovers. But do not let those leftovers go to waste; there are plenty of creative and delicious ways to use cranberry sauce beyond the Thanksgiving table. Leftover cranberry sauce? There is no need to despair. Here are some fun, delicious and inventive ways to give those leftovers a new purpose.

Cranberry grilled cheese

Cranberry sauce is the perfect addition to a grilled cheese sandwich. Layer sharp cheddar or brie cheese, turkey leftovers and a spoonful of cranberry sauce between two slices of sourdough bread. The tart cranberry cuts through the richness of the cheese for a perfectly balanced bite. To feed a crowd, consider making a casserole dish of turkey cranberry sliders with leftover sauce and turkey meat.

Cranberry vinaigrette

To make a simple salad dressing, grab a mason jar and add a tablespoon of leftover cranberry sauce. Pour in a splash of white balsamic vinegar and an equal portion of olive oil. Add pinches of fresh herbs, salt and pepper. Secure the lid, shake well and drizzle the zesty cranberry vinaigrette over an autumn salad with lettuce, gorgonzola cheese, pecans and dried cranberries.

Cranberry BBQ sauce

For an easy homemade barbecue sauce, mix leftover cranberry sauce with ketchup or chili sauce, a dash of hot sauce or Worcestershire sauce, and a bit of brown sugar. The result is a tangy barbecue sauce that pairs beautifully with cocktail meatballs, roasted chicken or pork chops. Guests will never guess that the base of your homemade barbecue sauce was the leftover cranberry sauce from Thanksgiving.

Cranberry yogurt parfait

For a quick breakfast or snack, layer cranberry sauce with vanilla Greek yogurt and granola for a simple yet elegant autumn parfait. The sweetness of the granola and the tanginess of the cranberry sauce make for a great flavor balance. For bonus points, add a drizzle of maple syrup and pecans.
Cocktail mixer

Yes, cranberry sauce can be used in cocktails. To make a festive drink, shake a generous spoonful of cranberry sauce with vodka, a splash of orange juice and a squeeze of simple syrup. Add ice and a fresh rosemary sprig, and the result is a tart, refreshing cocktail perfect for the holiday season.

For centuries, cranberry sauce has been a staple on the American Thanksgiving table. While most Americans continue to view it as a traditional holiday dish, an emerging trend shows chefs, food bloggers and home cooks finding new ways to incorporate cranberry sauce into recipes throughout the year. Cranberry sauce may never steal the spotlight during Thanksgiving dinner, where traditional dishes like mashed potatoes, buttery rolls and pies often take center stage. However, its creative uses can make it a standout ingredient in the days that follow. As Thanksgiving cleanup commences and a bowl of leftover sauce remains, there is no need to worry. This underdog simply requires a bit of creativity to shine.

Michelle Goth is a professionally trained cook and cookbook author dedicated to celebrating Midwestern cooking traditions. She shares easy recipes for family dinners and holidays at Blackberry Babe.
SANTA CLARA, Calif., Dec. 10, 2024 /PRNewswire/ -- Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced that it has pioneered a new custom HBM compute architecture that enables XPUs to achieve greater compute and memory density. The new technology is available to all of its custom silicon customers to improve the performance, efficiency and TCO of their custom XPUs. Marvell is collaborating with its cloud customers and the leading HBM manufacturers, Micron, Samsung Electronics and SK hynix, to define and develop custom HBM solutions for next-generation XPUs.

HBM is a critical component integrated within the XPU using advanced 2.5D packaging technology and high-speed industry-standard interfaces. However, the scaling of XPUs is limited by the current standard interface-based architecture. The new Marvell custom HBM compute architecture introduces tailored interfaces to optimize performance, power, die size, and cost for specific XPU designs. This approach considers the compute silicon, HBM stacks, and packaging. By customizing the HBM memory subsystem, including the stack itself, Marvell is advancing customization in cloud data center infrastructure. Marvell is collaborating with major HBM makers to implement this new architecture and meet cloud data center operators' needs.

The Marvell custom HBM compute architecture enhances XPUs by serializing and speeding up the I/O interfaces between its internal AI compute accelerator silicon dies and the HBM base dies. This results in greater performance and up to 70% lower interface power compared to standard HBM interfaces. The optimized interfaces also reduce the required silicon real estate in each die, allowing HBM support logic to be integrated onto the base die. These real-estate savings, up to 25%, can be used to enhance compute capabilities, add new features, and support up to 33% more HBM stacks, increasing memory capacity per XPU.
These improvements boost XPU performance and power efficiency while lowering TCO for cloud operators.

"The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered," said Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell. "We're very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era."

"Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era," said Raj Narasimhan, senior vice president and general manager of Micron's Compute and Networking Business Unit. "Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron's industry-leading HBM power specs and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI."

"Optimizing HBM for specific XPUs and software environments will greatly improve the performance of cloud operators' infrastructure and ensure efficient power use," said Harry Yoon, corporate executive vice president of Samsung Electronics and head of Americas products and solutions planning. "The advancement of AI depends on such focused efforts. We look forward to collaborating with Marvell, a leader in custom compute silicon innovation."

"By collaborating with Marvell, we can help our customers produce a more optimized solution for their workloads and infrastructure," said Sunny Kang, VP of DRAM Technology, SK hynix America. "As one of the leading pioneers of HBM, we look forward to shaping this next evolutionary stage for the technology."
"Custom XPUs deliver superior performance and performance per watt compared to merchant, general-purpose solutions for specific, cloud-unique workloads," said Patrick Moorhead, CEO and Founder of Moor Insights & Strategy. "Marvell, already a player in custom compute silicon, is delivering tailored solutions to leading cloud companies. Their latest custom compute HBM architecture platform provides an additional lever to enhance the TCO for custom silicon. Through strategic collaboration with leading memory makers, Marvell is poised to empower cloud operators in scaling their XPUs and accelerated infrastructure, thereby paving the way for them to enable the future of AI."

Marvell and the M logo are trademarks of Marvell or its affiliates. Please visit www.marvell.com for a complete list of Marvell trademarks. Other names and brands may be claimed as the property of others.

This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events, results or achievements. Actual events, results or achievements may differ materially from those contemplated in this press release. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the "Risk Factors" section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
For further information, contact: Kim Markle pr@marvell.com View original content to download multimedia: https://www.prnewswire.com/news-releases/marvell-announces-breakthrough-custom-hbm-compute-architecture-to-optimize-cloud-ai-accelerators-302328144.html SOURCE Marvell
New Aquila DSP Delivers Cost, Power, and Scalability for 2 km to 20 km Connectivity, Extending Marvell Optical Interconnect Leadership

SANTA CLARA, Calif., Dec. 10, 2024 /PRNewswire/ -- Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced Marvell® Aquila, the industry's first coherent-lite DSP optimized for 1.6 Tbps coherent optical transceiver modules operating at O-band wavelengths. By combining advanced coherent modulation with scalable O-band optics, the Aquila DSP delivers a power- and performance-optimized solution tailored for the emerging market for distributed campus data center interconnects spanning up to 20 km with high bandwidth and low latency.

The industry is shifting from large-scale facilities to campus-based data centers due to power and space constraints. While PAM4 interconnects remain the standard for connections inside the data center and coherent data center interconnect (DCI) solutions address regional data center connectivity, both areas where Marvell is the industry leader, campus-based data centers require optimized interconnects spanning 2-20 km, driving the need for coherent-lite technology. Marvell, leveraging its unique expertise in both PAM4 and coherent DSPs, is leading this market transformation.

Traditional coherent DSPs are optimized for C-band tunable optics, which lack the scalability needed for high-volume data center deployment. The new Aquila coherent-lite DSP introduces an innovative O-band coherent architecture that delivers cost efficiency, power savings, and scalability, enabling the next generation of campus-based data center connectivity.

"Interconnect bandwidth, data center traffic, and data center capacity needs are all growing at accelerated rates because of AI, and operators are limited by the available power delivery in a single building," said Xi Wang, vice president of product marketing for Optical Connectivity at Marvell.
"Aquila offers data center operators a new, groundbreaking avenue for optimizing their infrastructure for sustainability and developing campus facilities that can scale with their customers' demands for cloud and AI services."

"The transition to distributed data centers is creating a growing demand for innovative solutions to address campus connectivity challenges," said Osa Mok, chief marketing officer at TeraHop Ltd. (previously known as InnoLight Technology). "Marvell's Aquila represents a significant step forward, bringing coherent technologies to this evolving market. By combining the advancements from Aquila with TeraHop's expertise in coherent modules and scalable optical solutions, we are establishing a new standard for performance and efficiency in campus networks."

"Shipments of coherent-lite solutions are expected to grow from sample volumes this year to over 1 million units per year by 2029," said Vlad Kozlov, founder and CEO of LightCounting. "Coherent-lite technology like Aquila from Marvell expands the options available to hyperscalers, providing a more energy-efficient solution to an emerging and critical use case."

Aquila is one of the latest members of the Marvell interconnect portfolio, optimized for specific use cases to help data centers maximize the utilization and performance of their infrastructure while reducing overall cost and power per bit. The extensive 1.6 Tbps portfolio also includes the Marvell LPO TIA and driver chipset; Ara, the industry's first 3nm PAM4 interconnect platform; the Nova family of PAM4 DSPs featuring 200 Gbps electrical and optical interfaces; and the Alaska® A PAM4 DSP for active electrical cables.

Availability

The Marvell Aquila coherent-lite DSP is sampling to select customers.

About Marvell

To deliver the data infrastructure technology that connects the world, we're building solutions on the most powerful foundation: our partnerships with our customers.
Trusted by the world's leading technology companies for over 25 years, we move, store, process and secure the world's data with semiconductor solutions designed for our customers' current needs and future ambitions. Through a process of deep collaboration and transparency, we're ultimately changing the way tomorrow's enterprise, cloud, automotive, and carrier architectures transform—for the better.

Marvell and the M logo are trademarks of Marvell or its affiliates. Please visit www.marvell.com for a complete list of Marvell trademarks. Other names and brands may be claimed as the property of others.

This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events, results or achievements. Actual events, results or achievements may differ materially from those contemplated in this press release. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the "Risk Factors" section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

For further information, contact: Kim Markle pr@marvell.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/marvell-unveils-industrys-first-coherent-lite-1-6-tbps-o-band-optimized-dsp-for-data-center-campus-connectivity-302328132.html

SOURCE Marvell
Doctored images have been around for decades. The term "Photoshopped" is part of everyday language. But in recent years, it has seemingly been replaced by a new word: deepfake. It's almost everywhere online, but you likely won't find it in your dictionary at home. What exactly is a deepfake, and how does the technology work?

A deepfake is an image or video that has been generated by artificial intelligence to look real. Most deepfakes use a type of AI called a "diffusion model." In a nutshell, a diffusion model creates content by stripping away noise. "With diffusion models, they found a very clever way of taking an image and then constructing that procedure to go from here to there," said Lucas Hansen. He and Siddharth Hiregowdara are co-founders of CivAI, a nonprofit educating the public on the potential — and dangers — of AI.

How diffusion models work

It can get complicated, so imagine the AI – or diffusion model – as a detective trying to catch a suspect. Like a detective, it relies on its experience and training. It recalls a previous case – a sneaky cat on the run. Every day the cat added more and more disguises. On Monday, no disguise. Tuesday, it put on a little wig. Wednesday, it added some jewelry. By Sunday, it's unrecognizable and wearing a cheeseburger mask. The detective learned that these changes can tell you what the cat wore and on what day. AI diffusion models do something similar with noise, learning what something looks like at each step.

"The job of the diffusion model is to remove noise," Hiregowdara said. "You would give the model this picture, and then it will give you a slightly de-noised version of this picture."
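The add-noise-then-remove-it loop described above can be sketched in a few lines of Python. This is a toy illustration only: in a real diffusion model, a trained neural network predicts the noise to strip away at each step, whereas this sketch "cheats" by replaying the stored cleaner stages to show what each de-noising step produces.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(image, steps, sigma=0.1):
    """Forward process: add a little Gaussian noise at each step
    (the cat putting on one more disguise per day), keeping every
    intermediate stage."""
    stages = [image]
    x = image.copy()
    for _ in range(steps):
        x = x + rng.normal(0.0, sigma, size=x.shape)
        stages.append(x.copy())
    return stages

def toy_denoise_step(noisy, cleaner):
    """Stand-in for the trained model: a real diffusion model would
    *predict* the slightly less noisy image; here we simply return the
    stored cleaner stage to illustrate the reverse process."""
    return cleaner

image = rng.random((8, 8))            # pretend 8x8 grayscale "cat"
stages = forward_noise(image, steps=10)

# Reverse process: start from the noisiest stage and walk back,
# removing one "disguise" (noise layer) at a time.
x = stages[-1]
for t in range(len(stages) - 2, -1, -1):
    x = toy_denoise_step(x, stages[t])

print(np.allclose(x, image))  # True: fully de-noised back to the original
```

A real image generator runs the same reverse loop, but starting from pure random noise and guided by a text prompt instead of stored stages.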
When it's time to solve the case and generate a suspect, we give it a clue: the prompt we supply when we create an AI-generated image. "We have been given the hint that this is supposed to look like a cat. So what catlike things can we see in here? Okay, we see this curve, maybe that's an ear," Hiregowdara said. The "detective" works backward, recalling its training. It sees a noisy image. Thanks to the clue, it is looking for a suspect — a cat. It subtracts disguises (noise) until it finds the new suspect. Case closed.

Now imagine the "detective" living and solving crimes for years and years. It learns and studies everything — landscapes, objects, animals, people, anything at all. So when it needs to generate a suspect or an image, it draws on its training and creates one.

Deepfakes and face swaps

Many deepfake images and videos employ some type of face-swapping technology. You have probably experienced this kind of technology already — face-swapping filters on Snapchat, Instagram or TikTok use technology similar to diffusion models, recognizing faces and replacing things in real time. "It will find the face in the image and then cut that out, kind of, then take the face and convert it to its internal representation," Hansen said. The results are refined, and the process repeats frame by frame.

The future and becoming our own detectives

As deepfakes become more realistic and tougher to detect, understanding how the technology works at a basic level can help us prepare for any dangers or misuse. Deepfakes have already been used to spread election disinformation, create fake explicit images of a teenager and even frame a principal with AI-created racist audio. "All the netizens on social media also have a role to play," Siwei Lyu said.
Lyu is a SUNY Empire Innovation Professor in the University at Buffalo's Department of Computer Science and Engineering and the director of the Media Forensics Lab. His team has created a tool to help spot deepfakes called the "DeepFake-o-meter."

"We do not know how to handle, how to deal, with these kinds of problems. It's very new. And also requires technical knowledge to understand some of the subtleties there," Lyu said. "The media, the government, can play a very active role to improve user awareness and education. Especially for vulnerable groups like seniors, the kids, who will start to understand the social media world and start to become exposed to AI technologies. They can easily fall for AI magic or start using AI without knowing the limits."

Both Lyu and CivAI believe in exposure and education to help combat any potential misuse of deepfake technology. "Our overall goal is that we think AI is going to impact pretty much everyone in a lot of different ways," Hansen said. "And we think that everyone should be aware of the ways that it's going to change them because it's going to impact everyone."

"More than just general education — just knowing the facts and having heard what's going to happen," he added. "We want to give people a really intuitive experience of what's going on."

Hansen goes on to explain CivAI's role in educating the public. "We try and make all of our demonstrations personalized as much as possible. What we're working on is making it so people can see it themselves. So they know it's real, and they feel that it's real," Hansen said. "And they can have a deep gut-level feel for the impact that it's going to have."

"A big part of the solution is essentially just going to be education and sort of cultural changes," he added.
"A lot of this synthetic content is sort of like a new virus that is attacking society right now, and people need to become immune to it in some ways. They need to be more suspicious about what's real and what's not, and I think that will help a lot as well."
Hibs boss David Gray has backed one of his team's brightest young prospects to force his way back into the starting XI. And the gaffer insists teen talent Rudi Molotnikov remains very much part of his plans – despite a drastic reduction in playing time after an all-action start to the season.

Virtually a guaranteed starter at the beginning of the campaign, the 18-year-old hasn't started any of the last eight games for Hibs. But the manager says the Scotland Under-19 star, who missed the weekend trip to Celtic Park with a minor groin injury likely to clear up in time for Saturday's visit of Ross County, is still in his thinking.

"He's in the first-team squad, he's in the dressing room now, he's around the first team every single day and that's because of how he adapted right at the start of the season," said Gray. "He did really well when he came in and continues to do well in training.

"But we've got a lot of competition now in his position; players are getting fitter, and when you find yourself in the position we are, sometimes you need a bit of experience as well. Rudi has dealt with that (not playing much) really well.

"He will 100 per cent come again; he's still heavily in my thoughts every single week. Even when he's not been playing, he comes into the conversations because of how well he does in training and what he was giving us early in the season, and I still believe he's got a really bright future for us."

Gray has been impressed by how Molotnikov, equally comfortable on the wing or in the No. 10 position, has used his time on the training ground to watch and learn from veterans like Canadian international Junior Hoilett.
The manager, glad to see the youngster making up lost ground after a frustrating loan spell with Stirling Albion last season, said: "Yes, I see him learning from the senior boys every day. Not just Junior; you've got Dwight Gayle as well, Martin Boyle, all the senior boys in the group.

"We've got a lot of good attacking players in those positions. Rudi can play a variety of positions as well, but he also just loves playing football. He's such a young boy, enjoying his training, enjoying the opportunity to play for Hibs.

"If you'd asked him at the start of the season if he thought he'd play as many games as he has, he'd probably say no, but it's a credit to him for how well he's done. He's someone I identified last season as someone I really liked.

"He was doing well last season, and I always knew he could deal with the physicality of the Scottish Premiership, and he's come a long way since going on loan and not playing much last season. He can take a lot of confidence from, and belief in, what he's doing and, as I've already said, he has a really bright future - he just needs to keep working hard and keep improving, which he's desperate to do."
Tina Knowles is no stranger to making waves on social media. But her latest move has unfortunately got folks talking, and not in the best way. I’ll explain. As news of a lawsuit accusing a fellow celebrity of raping a minor alongside Sean “Diddy” Combs back in 2000 continues to circulate in the news cycle, it appeared that Knowles liked a particular post about it on Instagram over the weekend. The post has garnered over 2,800 likes and over 400 comments, but it was the famous matriarch’s engagement that stood out amongst the crowd. However, things are not what they seem, if you let Knowles tell it. Less than 24 hours after reports of her liking the post made headlines, she moved to make it clear that she didn’t like the post after all and that people need to “stop playing with her.” “I was hacked! As you all know, I do not play about my family. So if you see something uncharacteristic of me just know that it is not me!” the post read, with a further warning in the caption: “No weapon formed against me shall prosper.” In the comments section, friends and followers lobbed their support behind her, with acclaimed marketing executive Bozoma Saint John writing: “And we don’t play about YOU!” Added famed costume designer and creative director June Ambrose, “Amen ‘no weapons.’” Veteran actress Holly Robinson Peete also chimed in by simply adding prayer-hands emojis. However, over on X/Twitter, folks weren’t so understanding: “I know tina knowles hacker is tired of being blamed for her messiness,” wrote one user. “Nobody gets their IG hacked more often than Tina Knowles,” said another. “Who sitting there saying ‘Let’s go hack Tina Knowles’ Don’t piss me OFFF,” added another.
ALTOONA, Pa. — The suspect in the killing of UnitedHealthcare’s CEO struggled with deputies and shouted Tuesday while arriving for a court appearance in Pennsylvania, a day after he was arrested at a McDonald’s and charged with murder. Luigi Nicholas Mangione emerged from a patrol car, spun toward reporters and shouted something partly unintelligible referring to an “insult to the intelligence of the American people” while deputies pushed him inside. Prosecutors began to take steps to bring Mangione back to New York to face a murder charge while new details emerged about his life and how he was captured. The 26-year-old Ivy League graduate from a prominent Maryland real estate family was charged with murder hours after he was arrested in the Manhattan killing of Brian Thompson, who led the United States’ largest medical insurance company. At the brief hearing, defense lawyer Thomas Dickey informed the court that Mangione would not waive extradition to New York but instead wants a hearing on the issue. He has 14 days to challenge detention. Mangione likely was motivated by his anger with what he called “parasitic” health insurance companies and a disdain for corporate greed, a law enforcement bulletin obtained by The Associated Press said. He wrote that the U.S. has the most expensive health care system in the world and that profits of major corporations continue to increase while “our life expectancy” does not, according to the bulletin, based on a review of his hand-written notes and social media posts. Mangione called “Unabomber” Ted Kaczynski a “political revolutionary” and may have found inspiration from the man who carried out a series of bombings while railing against modern society and technology, according to the bulletin. Mangione remained jailed in Pennsylvania, where he was initially charged with possession of an unlicensed firearm, forgery and providing false identification to police. 
Manhattan prosecutors obtained an arrest warrant, a step that could help expedite his extradition from Pennsylvania. Mangione was arrested Monday in Altoona, Pennsylvania — about 230 miles west of New York City — after a McDonald’s customer recognized him and notified an employee, authorities said. Thompson, 50, was killed Wednesday as he walked alone to a Manhattan hotel for an investor conference. Police saw the shooting as a targeted attack.