The Government of India on Wednesday announced a significant reshuffle of senior Indian Administrative Service (IAS) officers, with multiple key appointments made across several important departments.

Rachna Shah, IAS, who was serving as Secretary of the Ministry of Textiles, has been appointed the new Secretary of the Department of Personnel and Training (DoPT), the department responsible for managing India's civil services. Shah succeeds Neelam Shammi Rao, IAS, who moves to the Ministry of Textiles, taking over from Shah in that role. Shah, a graduate in Business Economics from Delhi University, has extensive experience, having worked in several important ministries, including Science and Technology and Corporate Affairs.

Arunish Chawla, IAS, who previously held the position of Secretary in the Department of Pharmaceuticals, Ministry of Chemicals and Fertilizers, has been appointed Secretary of the Department of Revenue, Ministry of Finance. Chawla replaces Sanjay Malhotra, who resigned following his appointment as Governor of the Reserve Bank of India. In addition to his new role, Chawla will continue to hold additional charge as Secretary of the Ministry of Culture until a permanent appointment is made. Chawla, who has served in various senior positions, including as Additional Chief Secretary for Urban Development and Housing in Bihar, brings with him a wealth of experience in administration and urban planning.

Vineet Joshi, IAS, formerly Chief Secretary of Manipur, has been appointed the new Secretary of the Department of Higher Education in the Ministry of Education. Joshi, who holds a degree in mechanical engineering from IIT Kanpur and an MBA from the Indian Institute of Foreign Trade, has a long history in the civil service. He has previously worked as Additional Secretary in the Ministry of Education and also led the National Testing Agency (NTA) as its Director from 2018.

Sanjay Sethi, IAS, has been appointed Secretary of the National Commission for Minorities (NCM), Ministry of Minority Affairs, in the rank and pay of Secretary to the Government of India. Sethi takes over from Neelam Shammi Rao, who has been reassigned to the Ministry of Textiles. Sethi has had an extensive career in administrative roles, including serving as Additional Municipal Commissioner of Projects in the Municipal Corporation of Greater Mumbai (MCGM), as well as Municipal Commissioner of Thane and Nagpur. He also held the position of CEO of the Maharashtra Industrial Development Corporation (MIDC).

Amit Agrawal, IAS, currently Chief Executive Officer of the Unique Identification Authority of India (UIDAI), has been appointed the new Secretary of the Department of Pharmaceuticals, Ministry of Chemicals and Fertilizers. Agrawal succeeds Chawla, taking over the role with immediate effect. Agrawal, a graduate of IIT Kanpur, has held several senior positions, including Additional Secretary in both the Ministry of Electronics and Information Technology (MeitY) and the Ministry of Finance. His experience also includes a tenure as Finance Secretary in the state government of Chhattisgarh.

Neerja Sekhar, IAS, who was serving as Special Secretary in the Ministry of Information and Broadcasting, has been appointed Director General of the National Productivity Council (NPC). The NPC, which operates under the Department for Promotion of Industry and Internal Trade, works to enhance productivity across various sectors of the Indian economy. Sekhar, an experienced bureaucrat, brings valuable expertise to her new position as she continues to contribute to India's economic development.

The Commercial Bank of Ceylon has announced a partnership with AgStar PLC to promote smart agriculture machinery and equipment via the bank's Diribala Green Development Loan Scheme. A Memorandum of Understanding signed by the two companies paves the way for farmers who obtain loans from the bank to purchase inter-cultivators and irrigation systems from AgStar to receive exclusive discounts and other benefits. Under the agreement, the bank said, AgStar would offer a 5% discount to the bank's customers, as well as free advisory services for potential buyers and free installation.

Smart agriculture equipment can help farmers adapt to climate variability by providing tools for precision watering, soil management, and pest control. These practices make farms more resilient to drought, extreme weather, and other climate impacts, protecting farmers' investments and yields.

Beyond lending, the partnership extends to training, after-sales support, and maintenance of the equipment, ensuring that farmers know how to use and maintain the technology effectively, which maximises productivity and extends the equipment's lifespan, the bank said.

Commercial Bank's Diribala Green Development Loan Scheme aims to encourage sustainable agricultural practices. It supports the purchase of smart, eco-friendly machinery that helps farmers adopt practices that are less harmful to the environment, such as precision agriculture, which uses resources more efficiently and reduces waste. The bank has also launched several other initiatives to drive the adoption of modern and smart agriculture practices in Sri Lanka to improve productivity, address manpower shortages, reduce costs, and promote sustainability and food security.

This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

The future of software-development jobs is changing rapidly as more companies adopt AI tools that can accelerate the coding process and close experience gaps between junior- and senior-level developers. Increased AI adoption could be part of the tech industry's "white-collar recession," which has seen slumps in hiring and recruitment over the past year. Yet integrating AI into workflows can offer developers the tools to focus on creative problem-solving and building new features.

On November 14, Business Insider convened a roundtable of software developers as part of our "CXO AI Playbook" series to learn how artificial intelligence was changing their jobs and careers. The conversation was moderated by Julia Hood and Jean Paik from BI's Special Projects team.

These developers discussed the shifts in their day-to-day tasks, which skills people would need to stay competitive in the industry, and how they navigate the expectations of stakeholders who want to stay on the cutting edge of this new technology. Panelists said AI has boosted their productivity by helping them write and debug code, which has freed up their time for higher-order problems, such as designing software and devising integration strategies. However, they emphasized that some of the basics of software engineering — learning programming languages, scaling models, and handling large-scale data — would remain important. The roundtable participants also said developers could provide critical insight into challenges around AI ethics and governance.

The roundtable participants were:

Pooya Amini, software engineer, Meta.
Greg Jennings, head of engineering for AI, Anaconda.
Shruti Kapoor, lead member of technical staff, Slack.
Aditi Mithal, software-development engineer, Amazon Q.
Igor Ostrovsky, cofounder, Augment.
Neeraj Verma, head of applied AI, Nice.
Kesha Williams, head of enterprise architecture and engineering, Slalom.

The following discussion was edited for length and clarity.

Julia Hood: What has changed in your role since the popularization of gen AI?

Neeraj Verma: I think the expectations that are out there in the market for developers on the use of AI are actually almost a bigger impact than the AI itself. You hear about how generative AI is sort of solving this blank-paper syndrome. Humans have this concept that if you give them a blank paper and tell them to go write something, they'll be confused forever. And generative AI is helping overcome that.

The expectation from executives now is that developers are going to be significantly faster but that some of the creative work the developers are doing is going to be taken away — which we're not necessarily seeing. We're seeing it as more of a boilerplate creation mechanism for efficiency gains.

Aditi Mithal: I joined Amazon two years ago, and I've seen how my productivity has changed. I don't have to focus on doing repetitive tasks. I can just ask Amazon Q chat to do that for me, and I can focus on more-complex problems that can actually impact our stakeholders and our clients. I can focus on higher-order problems instead of more-repetitive tasks for which the code is already out there internally.

Shruti Kapoor: One of the big things I've noticed with writing code is how open companies have become to AI tools like Cursor and Copilot and how integrated they've become into the software-development cycle.
It's no longer considered a no-no to use AI tools like ChatGPT. I think two years ago when ChatGPT came out, it was a big concern that you should not be putting your code out there. But now companies have kind of embraced that within the software-development cycle.

Pooya Amini: Looking back at smartphones and Google Maps, it's hard to remember what the world looked like before these technologies. It's a similar situation with gen AI — I can't remember how I was solving the problem without it. I can focus more on actual work. Now I use AI as a kind of assistive tool. My main focus at work is on requirement gathering, like software design. When it comes to the coding, it's going to be very quick. Previously, it could take weeks. Now it's a matter of maybe one or two days, so I can actually focus on other stuff as AI is solving the rest for me.

Kesha Williams: In my role, it's been trying to help my team rethink their roles and not see AI as a threat but more as a partner that can help boost productivity, and encouraging my team to make use of some of the new embedded AI and gen-AI tools. Really helping my team upskill and putting learning paths in place so that people can embrace AI and not be afraid of it. More of the junior-level developers are really afraid of AI replacing them.

Hood: Are there new career tracks opening up now that weren't here before?

Verma: At Nice, we have something like 3,000 developers, and over the last, I think, 24 months, 650 of them have shifted into AI-specific roles, which was sort of unheard of before. Even out of those 650, we've got about a hundred who are experts at things like prompt engineering. Over 20% of our developers are not just developers being supported by AI but developers using AI to write features.

Kapoor: I think one of the biggest things I've noticed in the last two to three years is the rise of a job title called "AI engineer," which did not exist before, and it's kind of in between an ML engineer and a traditional software engineer. I'm starting to see more and more companies where AI engineer is one of the top-paying jobs available for software engineers. One of the cool things about this job is that you don't need an ML-engineering background, which means it's accessible to a lot more people.

Greg Jennings: For developers who are relatively new or code-literate knowledge workers, I think they can now use code to solve problems where previously they might not have. We have designers internally who are now creating full-blown interactive UIs using AI to describe what they want and then providing that to engineers. They've never been able to do that before, and it greatly accelerates the cycle.

For more-experienced developers, I think there are a huge number of things that we still have to sort out: the architectures of these solutions, how we're actually going to implement them in practice. The nature of testing is going to have to change a lot as we start to include these applications in places where they're more mission-critical.

Amini: On the other side, looking at threats that can come out of AI, new technologies and new positions can emerge as well. We don't currently have clear regulations in terms of ownership or the issues related to gen AI, so I imagine there will be more positions in terms of ethics.

Mithal: I feel like a Ph.D. is not a requirement anymore to be a software developer.
If you have some foundational ML, NLP knowledge, you can target some of these ML-engineer or AI-engineer roles, which gives you a great opportunity to be in the market.

Williams: I'm seeing new career paths in specialized fields around ML and LLM operations. For my developers, they're able to focus more on strategy and system design and creative problem-solving, and it seems to help them move faster into architecture. System design, system architecture, and integration strategies — they have more time to do that because of AI.

Jean Paik: What skills will developers need to stay competitive?

Verma: I think a developer operating an AI system requires product-level understanding of what you're trying to build at a high level. And I think a lot of developers struggle with prompt engineering from that perspective. Having the skills to clearly articulate what you want to an LLM is a very important skill.

Williams: Developers need to understand machine-learning concepts and how AI models work, not necessarily how to build and train these models from scratch but how to use them effectively. As we're starting to use Amazon Q, I've realized that our developers are now becoming prompt engineers, because you have to get that prompt right in order to get the best results from your gen-AI system.

Jennings: Understanding how to communicate with these models is very different. I almost think that it imparts a need for engineers to have a little bit more of a product lens, where a deeper understanding of the actual business problem they're trying to solve is necessary to get the most out of it. Developing evaluations that you can use to optimize those prompts, so going from prompt engineering to actually tuning the prompts in a more-automated way, is going to emerge as a more common approach.

Igor Ostrovsky: Prompt engineering is really important. That's how you interact with AI systems, but this is something that's evolving very quickly. Software development will change in five years much more rapidly than anything we've seen before. How you architect, develop, test, and maintain software — that will all change, and how exactly you interact with AI will also evolve. I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years. Software developers will need this desire to adapt and learn and have the ability to solve hard problems.

Mithal: As a software developer, some of the basics won't change. You need to understand how to scale models, build scalable solutions, and handle large-scale data. When you're training an AI model, you need data to support it.

Kapoor: Knowledge of a programming language would be helpful, specifically Python or even JavaScript. Knowledge of ML or some familiarity with ML will be really helpful. Another thing is that we need to make sure our applications are a lot more fault-tolerant. That is also a skill that front-end or back-end engineers who want to transition to an AI-engineering role need to be aware of. One of the biggest problems with prompts is that the answers can be very unpredictable and can lead to a lot of different outputs, even for the same prompt. So being able to make your application fault-tolerant is one of the biggest skills we need to apply in AI engineering.
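One way to make that fault-tolerance point concrete is a minimal sketch, assuming a generic chat-completion call rather than any specific vendor SDK: validate the model's output against the structure you expect and retry with a corrective prompt when it does not parse. The call_model stub and the example schema below are hypothetical placeholders, not part of any particular product.

```python
import json
import time

# Hypothetical client call: swap in whichever LLM SDK your team actually uses.
def call_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real LLM API call")

# Example schema, for illustration only: the fields the application expects back.
REQUIRED_FIELDS = {"summary": str, "priority": int}

def parse_or_none(raw: str):
    """Return a dict if the response is valid JSON with the expected fields, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict):
        return None
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            return None
    return data

def robust_completion(prompt: str, max_attempts: int = 3, backoff_seconds: float = 1.0):
    """Retry with a corrective instruction when the model's output fails validation."""
    attempt_prompt = prompt
    for attempt in range(max_attempts):
        raw = call_model(attempt_prompt)
        parsed = parse_or_none(raw)
        if parsed is not None:
            return parsed
        # Ask the model to fix its own output instead of failing the whole request.
        attempt_prompt = (
            prompt
            + "\n\nYour previous answer was not valid JSON with the fields "
            + ", ".join(REQUIRED_FIELDS)
            + ". Respond with only that JSON object."
        )
        time.sleep(backoff_seconds * (attempt + 1))
    return None  # caller decides the fallback: default value, human review, or an error
```

The specific schema matters less than the habit: treat model output as untrusted input, validate it, and design a fallback path for the cases where validation keeps failing.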
Hood: What are the concerns and obstacles you have as AI gains momentum? How do you manage the expectations of nontech stakeholders in the organization who want to stay on the leading edge?

Ostrovsky: Part of the issue is that interacting with ChatGPT or cloud AI is so easy and natural that it can be surprising how hard it actually is to control AI behavior, where you need AI to understand constraints, have access to the right information at the right time, and understand the task. When setting expectations with stakeholders, it is important that they understand we're working with a very advanced technology and that they are realistic about the risk profile of the project.

Mithal: One is helping them understand the trade-offs. It could be security versus innovation or speed versus accuracy. The second is metrics. Is it actually improving the efficiency? What is the acceptance rate for our given product? Communicating all of those to the stakeholders gives them an idea of whether the product they're using is making an impact or if it's actually helping the team become more productive.

Williams: Some of the challenges I'm seeing are mainly around ethical AI concerns, data privacy, and costly, resource-intensive models that go against budget and infrastructure constraints. On the vendor or stakeholder side, it's really more about educating our nontechnical stakeholders about the capabilities of AI and the limitations and trying to set realistic expectations.

We try to help our teams understand, for their specific business area, how AI can be applied. So how can we use AI in marketing or HR or legal, and giving them real-world use cases.

Verma: Gen AI is really important, and it's so easy to use ChatGPT, but what we find is that gen AI makes a good developer better and a worse developer worse. Good developers understand how to write good code and how good code integrates into projects. ChatGPT is just another tool to help write some of the code that fits into the project. That's the big challenge that we try to make sure our executives understand: not everybody can use this in the most effective manner.

Jennings: There are some practical governance concerns that have emerged. One is understanding the tolerance for bad responses in certain contexts. For some problems, you may be more willing to accept a bad response because you structure the interface in such a way that there's a human in the loop. If you're attempting to not have a human in the loop, that could be problematic depending on what you want the model to do. It's about building better muscle for the organization to have a good intuition about where these models can potentially fail and in what ways. In addition to that, understanding what training data went into that model, especially as models are used more as agents and have privileged access to different applications and data sources that might be pretty sensitive.

Kapoor: I think one of the biggest challenges that can happen is how companies use the data that comes back from LLM models and how they're going to use it within the application. Removing the human component scares me a lot.

Verma: It's automation versus augmentation. There are a lot of cases where augmentation is the big gain. I think automation is a very small, closed case — there are very few things I think LLMs are ready to automate in the world right now.
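The human-in-the-loop pattern the panelists describe can be sketched in a few lines. This is illustrative only: the confidence score, threshold, and review queue below are hypothetical stand-ins for whatever quality signals and tooling a given team already has, not a specific product's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelResult:
    answer: str
    confidence: float  # assumes the pipeline produces some quality or confidence signal

def route_with_human_in_the_loop(
    result: ModelResult,
    auto_threshold: float,
    send_to_review: Callable[[ModelResult], None],
) -> Optional[str]:
    """Auto-apply only high-confidence results; queue everything else for a person.

    This augments the reviewer instead of fully automating the decision:
    low-confidence output never reaches the user without review.
    """
    if result.confidence >= auto_threshold:
        return result.answer  # deemed safe enough to act on automatically
    send_to_review(result)    # a human makes the final call
    return None               # caller shows "pending review" or a fallback

# Example wiring with a trivial in-memory review queue.
review_queue = []
routed = route_with_human_in_the_loop(
    ModelResult(answer="Refund approved", confidence=0.62),
    auto_threshold=0.9,
    send_to_review=review_queue.append,
)
assert routed is None and len(review_queue) == 1
```

Tuning the threshold per use case is essentially the "tolerance for bad responses" judgment described above: tighter where mistakes are costly, looser where a wrong answer is cheap to undo.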
