Customize Consent Preferences

We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.

The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ... 

Always Active

Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.

No cookies to display.

Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.

No cookies to display.

Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.

No cookies to display.

Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.

No cookies to display.

Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns.

No cookies to display.

BUSINESS TECH | Risks and rewards of AI in financial planning

0

According to a survey, more than half of wealth management firms have an AI project underway already.

business analytics financial concept

business analytics financial concept, worldwide company trading, international finance network and candlestick chart, fintech

Artificial intelligence (AI) has spread to nearly every industry as companies and consumers both aim to leverage its efficiencies of scale. Tasks like data analysis, transcription, customer support, and everything in between can be performed using AI to reduce time to results by orders of magnitude. Financial planning is no exception.

According to a survey from F2 Strategy, more than half of wealth management firms have an AI project underway already. They’re interested in predictive analytics on market conditions and securities changes over time, optical character recognition to analyze documents, workflow automation, chatbots, and more. The potential is clear – AI can reduce the human time spent on these tasks by as much as 90%. At the same time, more than 60% of firms say they need more education on AI. So while the upside is undeniable, the value to risk ratio is less clear.
This dynamic is particularly important for financial planning, where the stakes are higher – families’ and individuals’ money is directly at risk. While bespoke wealth management services typically cater to higher-net worth individuals, AI makes it possible to offer these services to a broader group of people. Advisors can develop customer profiles and deliver personalized plans based on age, assets, risks, goals, and needs in a fraction of the time, which means firms can offer it to more people. This represents a new market for wealth managers, but also a larger risk pool.

We must always remember that threat actors are using AI too. It offers attackers the same exact benefits – it’s a force multiplier that allows them to increase the scale and effectiveness of their campaigns. They can even poison the AI model itself to reveal sensitive information or deliver malicious results. Moreover, employees who are not adequately trained can inadvertently expose sensitive information through the information they input into AI tools, which subsequently incorporate it into their machine learning activities. We’ve already seen instances of this invalidating intellectual property claims.

Security controls therefore have to be integrated into the entire AI lifecycle, including employee training. Before using any AI tool, organizations must understand the privacy classification of all the data that might be input, the source of the data used to train the AI tools, and the specifics of the security protocols in place to protect sensitive information. This must be part of the AI rollout from day one. Open AI systems carry even more risk, as they’re designed to be accessible to the public, which enables them to learn from a much larger dataset, but also allows manipulation by bad actors. Closed systems are more secure, but require more hands-on management and model training. Employees should be given in-depth training about the tool and how it works, and how to use it safely – emphasizing which data can be used and which should never be exposed to a large language model (LLM) like the kind that power generative AI applications.

When implementing an AI-based solution, it’s important to identify the scope of the tool and restrict its data access to what’s absolutely necessary to train it. Develop a comprehensive understanding of the privacy of the information, the source of the model’s data, and the native security mechanisms built in. Many AI-tools have built-in defenses to protect against unethical use – a good example are ChatGPT’s rules that seek to prevent people from using it for nefarious purposes, like building malware. However, it’s also clear that these rules can be bypassed through cleverly worded prompts that obscure the intent of the user. This is one type of prompt injection attack, which is a category of threats unique to AI-based systems. Strong controls must be in place to prevent these attacks before they happen. Broadly, these controls fall under the scope of zero trust cybersecurity strategies.

AI tools, especially the LLMs that enable generative AI, should not be treated as a typical software tool. They are more like a hybrid between a tool and a user. Zero trust programs limit access to resources based on a person’s individual job function, scope, and needs. This limits the damage an attacker can do by compromising a single employee, because it limits the range of lateral movement. We have to remember that adding any software tool also increases the attack surface by offering more entry points to an attacker. Compromising a tool – like an AI tool – that has unlimited access to personally identifiable information, company secrets, proprietary tools, strategic forecasting, competitive analysis, and more could be catastrophic. Preventing this kind of a breach must be at the forefront of the strategy-level discussion to implement AI tools from the very beginning. After a cyber security incident, it’s often too late.

While most AI tools come with built-in security, organizations must take care to tailor these to their specific needs. They must also go beyond them. Despite similarities, each organization will have unique use cases, and calibrating defense to match these dynamics is table stakes for cybersecurity in 2024. The cost of cybercrime reached $8 trillion in 2023, according to a report from Cybersecurity Ventures. Clearly, this isn’t a niche threat. It can reasonably be considered among the primary threats every business faces today, and proactive security is therefore a foundation for doing business at all.

When we talk about AI, security is even more important. AI won’t replace financial advisors, but it will take the industry to its next stage of evolution, and that means new threats. The scale of the models and the data they ingest expand the attack surface exponentially, and just one breach can negate any and all gains a company makes leveraging AI. Cyber security analysis and control, under a zero trust model, is indispensable for unlocking the full potential of any AI-based tool.

WATCH: TECHSABADO and ‘TODAY IS TUESDAY’ LIVESTREAM on YOUTUBE

WATCH OUR OTHER YOUTUBE CHANNELS:

PLEASE LIKE our FACEBOOK PAGE and SUBSCRIBE to OUR YOUTUBE CHANNEL.

autoceremony >> experimental sound, synths, retro tech, shortwave

RACKET MUSIC GROUP >> alternative manila

Burning Chrome >>RC, die-cast cars, vintage anime, plus other collectibles

Zero Interrupt >>Vintage gadgets, gear and gizmos, plus some new one too!

PLEASE LIKE our FACEBOOK PAGE and SUBSCRIBE to OUR YOUTUBE CHANNEL.

contributed article shared by Check Point Software

Leave a Reply

Your email address will not be published. Required fields are marked *