
Category Archives: Machine Learning

The confounding problem of garbage-in, garbage-out in ML – Mint

One of the top 10 trends in data and analytics this year, as leaders navigate the COVID-19 world, according to Gartner, is "augmented data management": the growing use of ML/AI tools to clean and prepare robust data for AI-based analytics. Companies are striving to go digital and derive insights from their data, but the roadblock is bad data, which leads to faulty decisions. In other words: garbage in, garbage out.

"I was talking to a university dean the other day. It had 20,000 students in its database, but only 9,000 students had actually passed out of the university," says Deleep Murali, co-founder and CEO of Bengaluru-based Zscore. This kind of faulty data has a cascading effect, because all kinds of decisions, including financial allocations, are based on it.

Zscore started out with the idea of providing AI-based business intelligence to global enterprises. But the startup soon ran into a bigger problem: the domino effect of unreliable data feeding AI engines. "We realized we were barking up the wrong tree," says Murali. "Then we pivoted to focus on automating data checks."

For example, an insurance company allocates a budget to cover 5,000 hospitals in its database, but it turns out that one-third of them are duplicates with a slight alteration in name. "So far in pilots we've run for insurance companies, we showed $35 million in savings, with just partial data. So it's a huge problem," says Murali.
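Catching duplicates that differ only by "a slight alteration in name" is, at heart, a fuzzy string-matching problem. A minimal sketch using Python's standard-library difflib (the hospital names and the 0.9 threshold are invented for illustration; real record-linkage systems are considerably more sophisticated):

```python
from difflib import SequenceMatcher

def near_duplicates(names, threshold=0.9):
    """Flag pairs of names whose normalized similarity ratio meets the threshold."""
    cleaned = [" ".join(n.lower().split()) for n in names]  # case/whitespace normalization
    flagged = []
    for i in range(len(cleaned)):
        for j in range(i + 1, len(cleaned)):
            score = SequenceMatcher(None, cleaned[i], cleaned[j]).ratio()
            if score >= threshold:
                flagged.append((names[i], names[j], round(score, 2)))
    return flagged

hospitals = ["St. Mary Hospital", "St Mary Hospital", "City Care Clinic"]
print(near_duplicates(hospitals))  # the first two names are near-duplicates
```

Pairs scoring above the threshold would typically be queued for human review rather than deleted automatically.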


This is what prompted IBM chief Arvind Krishna to reveal that the top reason for its clients to halt or cancel AI projects was their data. He pointed out that 80% of an AI project involves collecting and cleansing data, but companies were reluctant to put in the effort and expense for it.

That was in the pre-COVID era. "What's happening now is that a lot of companies are keen to accelerate their digital transformation. So customer traction is picking up from banks and insurance companies as well as the manufacturing sector," says Murali.

Data analytics tends to sit on the fringes of a company's operations, rather than at its core. Zscore's product aims to change that by automating data flow and improving its quality. Use cases differ from industry to industry. For insurance companies, for example, a huge drain is false claims, which can range from absurdities like male pregnancies and braces for six-month-old toddlers to subtler cases like the same hospital receiving allocations under different names.

"We work with a leading insurance company in Australia, and claims leakage is its biggest source of loss. The moment you save anything in claims, it has a direct impact on revenue," says Murali. "Male pregnancies and braces for six-month-olds seem like simple leaks, but companies tend to ignore them. Legacy systems and rules haven't accounted for all the possibilities. But now a claim comes to our system and multiple algorithms spot anything suspicious. It's a parallel system to the existing claims processing system."
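Rule-based sanity checks of the kind Murali describes can be as simple as a function that tests each claim against known impossibilities. A sketch (the field names and rules below are hypothetical, for illustration only, not Zscore's system):

```python
def flag_suspicious(claim):
    """Return the list of rules a claim record violates (fields and rules are illustrative)."""
    issues = []
    if claim.get("procedure") == "pregnancy care" and claim.get("sex") == "M":
        issues.append("male pregnancy claim")
    if claim.get("procedure") == "orthodontic braces" and claim.get("age_years", 99) < 12:
        issues.append("braces claimed for a young child")
    if claim.get("amount", 0) <= 0:
        issues.append("non-positive claim amount")
    return issues

print(flag_suspicious({"procedure": "pregnancy care", "sex": "M", "amount": 1200}))
# -> ['male pregnancy claim']
```

In practice such rules would run alongside statistical models, as a parallel check on the existing claims pipeline.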

For manufacturing companies, buggy inventory data means placing orders for things they don't need. For example, there can be 15 different serial numbers for spanners, so you might order a spanner that's well stocked while the ones really required don't show up. "Companies lose 12-15% of their revenue each year because of data issues such as duplicate or excessive inventory," says Murali.

These problems have been exacerbated in the age of AI, where algorithms drive decision-making. Companies typically lack the expertise to prepare data in a way that is suitable for machine-learning models, and how data is labelled and annotated plays a huge role. Hence the need for supervised machine learning from tech companies like Zscore that can identify bad data and quarantine it.


Semantic and context analysis, along with studying manual processes, help develop industry- or organization-specific solutions. "So far, 80-90% of data work has been manual. What we do is automate identification of data ingredients, data workflows and root-cause analysis to understand what's wrong with the data," says Murali.

A couple of years ago, Zscore got into cloud data management multinational NetApp's accelerator programme in Bengaluru. This gave it a foothold abroad with a NetApp client in Australia. It also opened the door to working with large financial institutions.

The Royal Commission of Australia, which is the equivalent of the RBI, had come down hard on the top four banks and financial institutions for passing on faulty information. Its report said decisions had to be based on the right data, and it gave financial institutions 18 months to show progress. "This became motivation for us because these were essentially data-oriented problems," says Murali.

Malavika Velayanikal is a consulting editor with Mint. She tweets @vmalu.


Posted in Machine Learning | Comments Off on The confounding problem of garbage-in, garbage-out in ML – Mint

Is Wide-Spread Use of AI & Machine Intelligence in Manufacturing Still Years Away? – Automation World

According to a new report by PMMI Business Intelligence, artificial intelligence (AI) and machine learning is the area of automation technology with the greatest capacity for expansion. This technology can optimize individual processes and functions of the operation; manage production and maintenance schedules; and expand and improve the functionality of existing technology such as vision inspection.

While AI is typically aimed at improving operation-wide efficiency, machine learning is directed more toward the actions of individual machines: learning during operation, identifying inefficiencies in areas such as rotation and movement, and then adjusting processes to correct for them.

The advantages to be gained through the use of AI and machine learning are significant. One study released by Accenture and Frontier Economics found that by 2035, AI-empowered technology could increase labor productivity by up to 40%, creating an additional $3.8 trillion in direct value added (DVA) to the manufacturing sector.


However, only 1% of all manufacturers, both large and small, are currently utilizing some form of AI or machine learning in their operations. Most manufacturers interviewed said that they are trying to gain a better understanding of how to utilize this technology in their operations, and 45% of leading CPGs interviewed predict they will incorporate AI and/or machine learning within ten years.

A plant manager at a private-label SME reiterates that AI technology is still being explored, stating: "We are only now talking about how to use AI, and predict it will impact nearly half of our lines in the next 10 years."

While CPGs forecast that machine learning will gain momentum in the next decade, the near-future applications are likely to come in vision and inspection systems. Manufacturers can utilize both AI and machine learning in tandem, such as deploying sensors to key areas of the operation to gather continuous, real-time data on efficiency, which can then be analyzed by an AI program to identify potential tweaks and adjustments to improve the overall process.
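As a toy illustration of that sensor-plus-analysis loop, a trailing-window z-score is one simple way a program could flag an abnormal reading in a stream of machine data (the window, threshold, and vibration readings below are invented for the example; real systems would use far richer models):

```python
import statistics

def anomalies(readings, window=10, z_thresh=3.0):
    """Indices where a reading sits more than z_thresh standard deviations from its trailing window."""
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mean = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        if sd > 0 and abs(readings[i] - mean) / sd > z_thresh:
            flagged.append(i)
    return flagged

# Invented vibration readings: steady around 1.0, then a sudden spike.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0]
print(anomalies(vibration))  # flags the spike at index 10
```

Flagged indices would then feed a higher-level analysis step, the "AI program" role in the paragraph above.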


And, the report states, while these may appear to be expensive investments best left for the future, these technologies are increasingly affordable and offer solutions that can bring measurable efficiencies to smart manufacturing. In the days of COVID-19, gains to labor productivity and operational efficiency may be even more timely.


Source: PMMI Business Intelligence, Automation Timeline: The Drive Toward 4.0 Connectivity in Packaging and Processing



Why neural networks struggle with the Game of Life – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

The Game of Life is a grid-based cellular automaton that is very popular in discussions about science, computation, and artificial intelligence. It is an interesting example of how very simple rules can yield very complicated results.

Despite its simplicity, however, the Game of Life remains a challenge to artificial neural networks, AI researchers at Swarthmore College and the Los Alamos National Laboratory have shown in a recent paper. Titled "It's Hard for Neural Networks to Learn the Game of Life," their research investigates how neural networks explore the Game of Life and why they often miss finding the right solution.

Their findings highlight some of the key issues with deep learning models and give some interesting hints at what could be the next direction of research for the AI community.

British mathematician John Conway invented the Game of Life in 1970. Basically, the Game of Life tracks the on-or-off state (the "life") of a series of cells on a grid across timesteps. At each timestep, the following simple rules define which cells come to life or stay alive, and which cells die or stay dead:

- A live cell with two or three live neighbors stays alive.
- A live cell with fewer than two live neighbors dies (underpopulation).
- A live cell with more than three live neighbors dies (overpopulation).
- A dead cell with exactly three live neighbors comes to life (reproduction).

Based on these four simple rules, you can adjust the initial state of your grid to create interesting stable, oscillating, and gliding patterns.

One famous example is the pattern known as the glider gun.

You can also use the Game of Life to create far more complex patterns.

Interestingly, no matter how complex a grid becomes, you can predict the state of each cell in the next timestep with the same rules.
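Those same rules can be sketched in a few lines of Python. The step function below (which treats cells beyond the border as dead, one common convention) is the exact function the networks in the paper are asked to learn:

```python
def life_step(grid):
    """Apply one Game of Life timestep to a 2D list of 0/1 cells (cells beyond the border count as dead)."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(
            grid[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
        )

    nxt = []
    for r in range(rows):
        row = []
        for c in range(cols):
            n = live_neighbors(r, c)
            # A cell is alive next step iff it has 3 live neighbors, or is alive with 2.
            row.append(1 if n == 3 or (grid[r][c] and n == 2) else 0)
        nxt.append(row)
    return nxt

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # the horizontal bar flips to a vertical one
```

Applying the step twice brings the blinker back to its starting position, the simplest oscillating pattern.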

With neural networks being very good prediction machines, the researchers wanted to find out whether deep learning models could learn the underlying rules of the Game of Life.

There are a few reasons the Game of Life is an interesting experiment for neural networks. "We already know a solution," Jacob Springer, a computer science student at Swarthmore College and co-author of the paper, told TechTalks. "We can write down by hand a neural network that implements the Game of Life, and therefore we can compare the learned solutions to our hand-crafted one. This is not the case in."

It is also very easy to adjust the difficulty of the problem in the Game of Life by modifying the number of timesteps into the future that the target deep learning model must predict.

Also, unlike domains such as computer vision or natural language processing, if a neural network has learned the rules of the Game of Life, it will reach 100 percent accuracy. "There's no ambiguity. If the network fails even once, then it has not correctly learned the rules," Springer says.

In their work, the researchers first created a small convolutional neural network and manually tuned its parameters so it could predict the sequence of changes in the Game of Life's grid cells. This proved that there's a minimal neural network that can represent the rules of the Game of Life.
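The authors' hand-crafted network is not reproduced here, but the fact that a tiny convolutional computation can represent the rules is easy to illustrate: one 3x3 filter plus a band threshold suffices. The weights below are one possible construction, chosen for this sketch, not necessarily the paper's:

```python
import numpy as np

# One 3x3 filter: each neighbor weighted 2, the center weighted 1, so the
# pre-activation is s = center + 2 * (live-neighbor count).
KERNEL = np.array([[2, 2, 2],
                   [2, 1, 2],
                   [2, 2, 2]])

def conv_life_step(grid):
    """One Game of Life step as a single convolution plus a band threshold.

    s = C + 2N lands in [5, 7] exactly when the cell is alive next step:
    live cell with N=2 -> 5; dead cell with N=3 -> 6; live cell with N=3 -> 7.
    A network can express the band with two thresholded units.
    """
    g = np.asarray(grid)
    padded = np.pad(g, 1)  # cells beyond the border are dead
    s = np.zeros_like(g)
    for dr in range(3):          # explicit convolution over the 3x3 window
        for dc in range(3):
            s = s + KERNEL[dr, dc] * padded[dr:dr + g.shape[0], dc:dc + g.shape[1]]
    return ((s >= 5) & (s <= 7)).astype(int)
```

This convolution-based step agrees with the rule-by-rule definition on any board, which is the sense in which a minimal network can "represent" the Game of Life.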

Then, they tried to see if the same neural network could reach optimal settings when trained from scratch. They initialized the parameters to random values and trained the neural network on 1 million randomly generated examples of the Game of Life. The only way the neural network could reach 100 percent accuracy would be to converge on the hand-crafted parameter values. This would imply that the AI model had managed to parameterize the rules underlying the Game of Life.

But in most cases the trained neural network did not find the optimal solution, and its performance decreased even further as the number of prediction steps increased. The result of training was largely affected by the chosen set of training examples as well as the initial parameters.

Unfortunately, you never know what the initial weights of the neural network should be. The most common practice is to pick random values from a normal distribution, so settling on the right initial weights becomes a game of luck. As for the training dataset, in many cases it isn't clear which samples are the right ones, and in others there's not much of a choice.

"For many problems, you don't have a lot of choice in the dataset; you get the data that you can collect, so if there is a problem with your dataset, you may have trouble training the neural network," Springer says.

In machine learning, one of the popular ways to improve the accuracy of a model that is underperforming is to increase its complexity. And this technique worked with the Game of Life. As the researchers added more layers and parameters to the neural network, the results improved and the training process eventually yielded a solution that reached near-perfect accuracy.

But a larger neural network also means an increase in the cost of training and running the deep learning model.

On the one hand, this shows the flexibility of large neural networks. Although a huge deep learning model might not be the most optimal architecture to address your problem, it has a greater chance of finding a good solution. On the other, it proves that there is likely a smaller deep learning model that can provide the same or better results, if you can find it.

These findings are in line with the "lottery ticket hypothesis," presented at the ICLR 2019 conference by AI researchers at MIT CSAIL. The hypothesis suggests that for each large neural network, there are smaller sub-networks that can converge on a solution if their parameters have been initialized on lucky, winning values; hence the "lottery ticket" name.

"The lottery ticket hypothesis proposes that when training a convolutional neural network, small lucky subnetworks quickly converge on a solution," the authors of the Game of Life paper write. "This suggests that rather than searching extensively through weight-space for an optimal solution, gradient-descent optimization may rely on lucky initializations of weights that happen to position a subnetwork close to a reasonable local minima to which the network converges."
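The mechanics behind finding such a "winning ticket", pruning a trained network down to its largest-magnitude weights and resetting the survivors to their initial values, can be sketched with plain arrays. This is a toy illustration with random stand-ins for real weight matrices, not the paper's or the original hypothesis paper's full procedure:

```python
import numpy as np

def lottery_ticket(initial_w, trained_w, keep_frac=0.2):
    """Keep the largest-magnitude trained weights, reset the survivors to their initial values."""
    flat = np.abs(trained_w).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.sort(flat)[-k]                 # magnitude of the k-th largest weight
    mask = (np.abs(trained_w) >= threshold).astype(float)
    return mask, initial_w * mask                 # the candidate "winning ticket"

rng = np.random.default_rng(0)
w0 = rng.normal(size=(4, 4))   # stand-in for the network's initial weights
wT = rng.normal(size=(4, 4))   # stand-in for the weights after training
mask, ticket = lottery_ticket(w0, wT)
print(int(mask.sum()))         # 20% of 16 weights -> 3 weights survive
```

In the full procedure, the masked subnetwork would then be retrained from those initial values to test whether it still converges.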

"While Conway's Game of Life itself is a toy problem and has few direct applications, the results we report here have implications for similar tasks in which a neural network is trained to predict an outcome which requires the network to follow a set of local rules with multiple hidden steps," the AI researchers write in their paper.

These findings can apply to machine learning models used in logic or math solvers, weather and fluid dynamics simulations, and logical deduction in language or image processing.

"Given the difficulty that we have found for small neural networks to learn the Game of Life, which can be expressed with relatively simple symbolic rules, I would expect that most sophisticated symbol manipulation would be even more difficult for neural networks to learn, and would require even larger neural networks," Springer said. "Our result does not necessarily suggest that neural networks cannot learn and execute symbolic rules to make decisions; however, it suggests that these types of systems may be very difficult to learn, especially as the complexity of the problem increases."

The researchers further believe that their findings apply to other fields of machine learning that do not necessarily rely on clear-cut logical rules, such as image and audio classification.

For the moment, we know that, in some cases, increasing the size and complexity of our neural networks can solve the problem of poorly performing deep learning models. But we should also consider the negative impact of using larger neural networks as the go-to method to overcome impasses in machine learning research. One outcome can be greater energy consumption and carbon emissions caused by the compute resources required to train large neural networks. Another can be the collection of ever-larger training datasets instead of reliance on finding ideal distribution strategies across smaller datasets, which might not be feasible in domains where data is subject to ethical considerations and privacy laws. And finally, the general trend toward endorsing overcomplete and very large deep learning models can consolidate AI power in large tech companies and make it harder for smaller players to enter the deep learning research space.

"We hope that this paper will promote research into the limitations of neural networks so that we can better understand the flaws that necessitate overcomplete networks for learning. We hope that our result will drive development into better learning algorithms that do not face the drawbacks of gradient-based learning," the authors of the paper write.

"I think the results certainly motivate research into improved search algorithms, or methods to improve the efficiency of large networks," Springer said.


Algorithms may never really figure us out, thank goodness – The Boston Globe

An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the A-level exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how each school's students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students, even those on track for the same grades as poor students, much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn't an isolated incident: in the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students' scores, prompting protests from thousands of students and parents.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems. When courts and parole boards have used algorithms to forecast criminal behavior, for example, they have inaccurately identified Black defendants as future criminals more often than their white counterparts. Predictive policing systems, meanwhile, have led the police to unfairly target neighborhoods with a high proportion of non-white people, regardless of the true crime rate in those areas. And companies that have used recruitment algorithms have found that they amplify bias against women.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals' lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total, including one of the authors of this article. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.

The task for the teams was simple. They were given access to almost all of this data and asked to predict several important life outcomes for a sample of families. Those outcomes included the child's grade point average, their grit (a commonly used measure of passion and perseverance), whether the household would be evicted, the material hardship of the household, and whether the parent would lose their job.

The teams could draw on almost 13,000 predictor variables for each family, covering areas such as education, employment, income, family relationships, environmental factors, and child health and development. The researchers were also given access to the outcomes for half of the sample, which they could use to hone advanced machine-learning algorithms to predict each of the outcomes for the other half, which the organizers withheld. At the end of the challenge, the organizers scored the 160 submissions based on how well the algorithms predicted what actually happened in these people's lives.
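Scoring such a challenge typically comes down to comparing each model's holdout predictions against a trivial baseline. A minimal sketch of one standard metric, R-squared, where zero means no better than always predicting the mean (the GPA numbers below are invented, and the challenge's exact metric may differ):

```python
def r_squared(y_true, y_pred):
    """1 - SSE/SST: 1.0 is perfect, 0.0 matches always-predicting-the-mean, below 0 is worse than that baseline."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    return 1 - sse / sst

gpa_true = [2.0, 3.0, 4.0, 3.5]          # invented holdout values
print(r_squared(gpa_true, [3.125] * 4))  # predicting the mean scores exactly 0.0
```

"Only marginally better than random guesses," in these terms, means scores hovering just above that zero baseline.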

The results were disappointing. Even the best-performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.

In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts. The results of the Fragile Families Challenge, the authors conclude with notable understatement, "raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set."

Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.

These findings suggest that we should doubt that big data can ever perfectly predict human behavior and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly pre-determined, if an algorithm, given a set of past data points, cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.

Bryan Schonfeld and Sam Winter-Levy are PhD candidates in politics at Princeton University.


Six notable benefits of AI in finance, and what they mean for humans – Daily Maverick

Addressing AI anxiety

A common narrative around emerging technologies like AI, machine learning, and robotic process automation is the anxiety and fear that they'll replace humans. In South Africa, with an unemployment rate of over 30%, these concerns are valid.

But if we dig deep into what we can do with AI, we learn it will elevate the work that humans do, making it more valuable than ever.

Sage research found that most senior financial decision-makers (90%) are comfortable with automation performing more of their day-to-day accounting tasks in the future, and 40% believe that AI and machine learning (ML) will improve forecasting and financial planning.

What's more, two-thirds of respondents expect emerging technology to audit results continuously and to automate period-end reporting and corporate audits, reducing time to close in the process.

The key to realising these benefits is to secure buy-in from the entire organisation. With 87% of CFOs now playing a hands-on role in digital transformation, their perspective on technology is key to creating a digitally receptive team culture. And their leadership is vital in ensuring their organisations maximise their technology investments. Until employees make the same mindset shift as CFOs have, they'll need to be guided and reassured about the business's automation strategy and the potential for upskilling.

Six benefits of AI in layman's terms

Speaking during an exclusive virtual event to announce the results of the CFO 3.0 research, as well as the launch of Sage Intacct in South Africa, Aaron Harris, CTO of Sage, said one reason for the misperception about AI's impact on business and labour is that SaaS companies too often speak in technical jargon.

"We talk about AI and machine learning as if they're these magical capabilities, but we don't actually explain what they do and what problems they solve. We don't put it into terms that matter for business leaders and labour. We don't do a good job as an industry of explaining that machine learning isn't an outcome we should be looking to achieve; it's the technology that enables business outcomes, like efficiency gains and smarter predictive analytics."

For Harris, AI has remarkable benefits in six key areas:

Digital culture champions

Evolving from a traditional management style that relied on intuition to a more contemporary one based on data-driven evidence can be a culturally disruptive process. Interestingly, driving a cultural change wasn't a concern for most South African CFOs, with 73% saying their organisations are ready for more automation.

In fact, AI holds no fear for senior financial decision-makers: over two-thirds are not at all concerned about it, and only one in 10 believe that it will take away jobs.

So, how can businesses reimagine the work of humans when software bots are taking care of all the repetitive work?

How can we leverage the unique skills of humans, like collaboration, contextual understanding, and empathy?

"The future world is a world of connections," says Harris. "It will be about connecting humans in ways that allow them to work at a higher level. It will be about connecting businesses across their ecosystems so that they can implement digital business models to effectively and competitively operate in their markets. And it will be about creating connections across technology so that traditional, monolithic experiences are replaced with modern ones that reflect new ways of working and that are tailored to how individuals and humans will be most effective in this world."

New world of work

We can envision this world across three areas:

Sharing knowledge and timelines on strategic developments and explaining the significance of these changes will help CFOs to alleviate the fear of the unknown.

Technology may be the enabler driving this change, but how it transforms a business lies with those who are bold enough to take the lead. DM


Why Deep Learning DevCon Comes At The Right Time – Analytics India Magazine

The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from the industry on a single platform to share and discuss recent developments in the field.

Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the most rapidly advancing technologies in the world. From natural language processing to self-driving cars, it has come a long way. In fact, reports suggest that the deep learning market is expected to grow at a CAGR of 25% by 2024. It can thus easily be said that advancements in the field of deep learning have only just begun and have a long road ahead.


Being a crucial subset of artificial intelligence and machine learning, deep learning has seen rapid advancements over the last few years. It has been explored in various industries, from healthcare and e-commerce to advertising and finance, by many leading firms as well as startups across the globe.

While companies like Waymo and Google are using deep learning for their self-driving vehicles, Apple is using the technology for its voice assistant, Siri. Many others are using deep learning for automatic text generation, handwriting recognition, relevant caption generation, image colourisation and earthquake prediction, as well as for detecting brain cancers.

In recent news, Microsoft has introduced new advancements in its deep learning optimisation library DeepSpeed to enable next-gen AI capabilities at scale. It can now be used to train language models with one trillion parameters using fewer GPUs.

With that being said, increased adoption is expected in future across machine translation, customer experience, content creation, image data augmentation, 3D printing and more. A lot of this can be attributed to significant advancements in the hardware space as well as the democratisation of the technology, which has helped the field gain traction.


Many researchers and scientists across the globe have been working with deep learning technology to fight the deadly COVID-19 pandemic. In fact, in recent news, some researchers have proposed deep learning-based automated CT image analysis tools that can differentiate COVID patients from those who aren't infected. In another study, scientists have proposed a fully automatic deep learning system for diagnosing the disease as well as prognostic analysis. Many are also using deep neural networks to analyse X-ray images to diagnose COVID-19 among patients.

Along with these, startups like Zeotap, SilverSparro and Brainalyzed are leveraging the technology to either drive growth in customer intelligence or power industrial automation and AI solutions. With such solutions, these startups are making deep learning technology more accessible to enterprises and individuals.


Companies like Shell, Lenskart, Snaphunt, Baker Hughes, McAfee, Lowes, L&T and Microsoft are looking for data scientists who are equipped with deep learning knowledge. With significant advancements in this field, it has now become the hottest skill that companies are looking for in their data scientists.

Consequently, looking at these requirements, many edtech companies have started offering free online resources as well as paid certifications on deep learning to provide industry-relevant knowledge to enthusiasts and professionals. These courses and accreditations, in turn, help bridge the major talent gap that emerging technologies typically face as they mature.


With such major advancements in the field and its increasing use cases, deep learning has witnessed an upsurge in popularity as well as demand. Thus it is critical, now more than ever, to understand this complex subject in depth for better research and application. One needs a thorough understanding of the fundamentals to build a career in this ever-evolving field.

And for this reason, the Deep Learning DEVCON couldn't have come at a better time. Not only will it help amateurs as well as professionals get a better understanding of the field, but it will also provide them opportunities to network with leading developers and experts.

Further, the talks and workshops included in the event will provide hands-on experience for deep learning practitioners with various tools and techniques. Starting with machine learning vs deep learning, followed by feed-forward neural networks and deep neural networks, the workshops will cover topics like GANs, recurrent neural networks, sequence modelling, autoencoders, and real-time object detection. The two-day workshop will also provide an overview of deep learning as a broad topic, and all attendees will receive a certificate.

The workshops will help participants build a strong understanding of deep learning, from basics to advanced, along with in-depth knowledge of artificial neural networks. They will also clarify concepts around tuning, regularising and improving models, as well as the various building blocks and their practical implementations. Alongside, they will provide practical knowledge of applying deep learning in computer vision and NLP.

With the conference being virtual, participants can conveniently join the talks and workshops from the comfort of their homes. It is thus a perfect opportunity to get first-hand experience of the complex world of deep learning from leading experts and the best minds in the field, who will share their relevant experience to encourage enthusiasts and amateurs.

To register for Deep Learning DevCon 2020, visit here.

