
Category Archives: Machine Learning

Deep learning’s role in the evolution of machine learning – TechTarget

Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.

While many of the machine learning algorithms developed over the decades are still in use today, deep learning -- a form of machine learning based on multilayered neural networks -- catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.

Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.

The story of machine learning starts in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the groundwork for a new field that involved neural networks, automata theory and symbolic reasoning.

The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name artificial intelligence to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.

Early neural networks were not particularly useful -- nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.
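
The limitation is easy to see in code. Below is a minimal sketch (illustrative, not historical code) of the classic perceptron update rule: it converges on AND, which is linearly separable, but can never fit XOR, which is not.

```python
import numpy as np

# A single-layer perceptron: weighted sum followed by a step threshold.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# AND is linearly separable: the perceptron converges to a perfect rule.
w, b = train_perceptron(X, np.array([0, 0, 0, 1]))
print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]

# XOR is not linearly separable: no single weight vector and bias
# can classify all four points, so training never succeeds.
w, b = train_perceptron(X, np.array([0, 1, 1, 0]))
print([1 if xi @ w + b > 0 else 0 for xi in X])  # never matches [0, 1, 1, 0]
```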

"There was a massive focus on symbolic systems through the '70s, perhaps because of the idea that perceptrons were limited in what they could learn," said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery's special interest group on AI.

The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.

"That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s," said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.

In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.

The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.

"This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets," Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.

Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.

"Through the '90s and '00s, a joke used to be that 'neural networks are the second-best learning algorithm for any problem,'" Washington University's Das said.

Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.

Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.

Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.

"Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms," Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.

Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.

By 2012, deep learning attained star status after Hinton's team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.

"The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work," Das said.

Even when deep neural networks are not used directly, they indirectly drove -- and continue to drive -- fundamental changes in the field of machine learning, including the following:

Deep learning's predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI.

In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.

The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.

"Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.

The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.

Deep learning is also helping data scientists solve problems with smaller data sets and tackle cases where the data has not been labeled.

"One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning," said Juan Jos Lpez Murphy, AI and big data tech director lead at Globant, an IT consultancy.

These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.
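
To make the pattern concrete, here is a hedged sketch of that baseline approach using PyTorch and torchvision as one illustrative choice of library (the article names none); the five-class task is hypothetical:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone (assumes torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so the learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Swap in a small new head for a hypothetical 5-class task; this is
# the "small amount of customization" the quote refers to.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head's parameters are trained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```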

There are specific fields where deep learning provides a lot of value: image, speech and natural language processing, for example, as well as time series forecasting.

"The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning's] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders," said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.

For example, deep learning can find more efficient ways to autoencode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.
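
As a loose illustration of the encoding idea, the toy autoencoder below compresses sparse one-hot word vectors into small dense codes. It shows only the mechanics: real systems such as word2vec learn similarity from word context, which this sketch does not attempt. The vocabulary is hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical toy vocabulary; each word starts as a sparse one-hot vector.
vocab = ["cat", "dog", "car", "truck"]
one_hot = torch.eye(len(vocab))

# Minimal autoencoder: squeeze each one-hot input through a 2-d
# bottleneck and reconstruct it on the other side.
encoder = nn.Linear(len(vocab), 2)
decoder = nn.Linear(2, len(vocab))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=0.05
)
loss_fn = nn.MSELoss()

for _ in range(500):
    recon = decoder(encoder(one_hot))
    loss = loss_fn(recon, one_hot)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The 2-d bottleneck activations are the learned word vectors.
for word, vec in zip(vocab, one_hot):
    print(word, encoder(vec).tolist())
```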

More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.

"This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people's lives," Taylor said.

Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.

"Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy," said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.

Another factor in democratizing access to machine learning tools has been the rise of Python.

"The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production," Globant's Lpez Murphy said.

Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.

Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. "Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists," Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.

For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.

Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.

"Machine learning provides more innovative applications for end users, but unless we're choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence," said Justin Richie, director of data science at Nerdery, an IT consultancy.

"It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent," Richie said.

Originally posted here:
Deep learning's role in the evolution of machine learning - TechTarget


My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work – TechRepublic

Align Technology uses DevSecOps tactics to keep complex projects on track and align business and IT goals.


Align Technology's Chief Digital Officer Sreelakshmi Kolli is using machine learning and DevOps tactics to power the company's digital transformation.

Kolli led the cross-functional team that developed the latest version of the company's My Invisalign app. The app combines several technologies into one product, including virtual reality, facial recognition, and machine learning. Kolli said that using a DevOps approach helped to keep this complex work on track.

"The feasibility and proof of concept phase gives us an understanding of how the technology drives revenue and/or customer experience," she said. "Modular architecture and microservices allows incremental feature delivery that reduces risk and allows for continuous delivery of innovation."


The customer-facing app accomplishes several goals at once, the company said.

More than 7.5 million people have used the clear plastic molds to straighten their teeth, the company said. Align Technology has used data from these patients to train a machine learning algorithm that powers the visualization feature in the mobile app. The SmileView feature uses machine learning to predict what a person's smile will look like when the braces come off.

Kolli started with Align Technology as a software engineer in 2003. Now she leads an integrated software engineering group focused on product technology strategy and development of global consumer, customer and enterprise applications and infrastructure. This includes end user and cloud computing, voice and data networks and storage. She also led the company's global business transformation initiative to deliver platforms to support customer experience and to simplify business processes.

Kolli used the development process of the My Invisalign app as an opportunity to move the dev team to DevSecOps practices. Kolli said that this shift represents a cultural change, and making the transition requires a common understanding among all teams on what the approach means to the engineering lifecycle.

"Teams can make small incremental changes to get on the DevSecOps journey (instead of a large transformation initiative)," she said. "Investing in automation is also a must for continuous integration, continuous testing, continuous code analysis and vulnerability scans." To build the machine learning expertise required to improve and support the My Invisalign app, she has hired team members with that skill set and built up expertise internally.

"We continue to integrate data science to all applications to deliver great visualization experiences and quality outcomes," she said.

Align Technology uses AWS to run its workloads.

In addition to keeping patients connected with orthodontists, the My Invisalign app is a marketing tool to convince families to opt for the transparent but expensive alternative to metal braces.

Kolli said that IT leaders should work closely with business leaders to make sure initiatives support business goals such as revenue growth, improved customer experience, or operational efficiencies, and modernize the IT operation as well.

"Making the line of connection between the technology tasks and agility to go to market helps build shared accountability to keep technical debt in control," she said.

Align Technology released the revamped app in late 2019. In May of this year, the company released a digital version tool for doctors that combines a photo of the patient's face with their 3D Invisalign treatment plan.

This ClinCheck "In-Face" Visualization is designed to help doctors manage patient treatment plans.

The visualization workflow combines three components of Align's digital treatment platform: Invisalign Photo Uploader for patient photos, the iTero intraoral scanner to capture data needed for the 3D model of the patient's teeth, and ClinCheck Pro 6.0. ClinCheck Pro 6.0 allows doctors to modify treatment plans through 3D controls.

These new product releases are the first in a series of innovations to reimagine the digital treatment planning process for doctors, Raj Pudipeddi, Align's chief innovation, product, and marketing officer and senior vice president, said in a press release about the product.


Read more from the original source:
My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work - TechRepublic


2 books to strengthen your command of python machine learning – TechTalks


This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It's not. And every single one of them has been dismayed.

I see two reasons why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself, as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often use when speaking of the ML trade.

The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don't want to vilify any of those books and courses. I've reviewed several of them (and will review some more in the coming weeks), and I think they're invaluable sources for becoming a good machine learning developer.

But they're not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you're doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can't escape the math.

And all of that can't be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth's famous The Art of Computer Programming series.

So, what is all this tirade for? In my exploration of data science and machine learning, I'm always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.

In this post, I'll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.

Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python involves loading and preprocessing data in pandas before feeding it to your models.

Most books and courses on machine learning provide an introduction to the main pandas components such as DataFrames and Series and some of the key functions such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter's worth of code samples in most books.

In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You'll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.
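
For a flavor of what that coverage looks like in practice, here is a minimal sketch on hypothetical data of three operations the book treats in depth: filling missing values, joining frames on a shared key, and aggregating by group.

```python
import numpy as np
import pandas as pd

# Hypothetical records with a missing value.
left = pd.DataFrame({"id": [1, 2, 3], "age": [34, np.nan, 52]})
right = pd.DataFrame({"id": [1, 2, 3], "outcome": ["low", "high", "high"]})

# Cleaning: fill the missing age with the column median.
left["age"] = left["age"].fillna(left["age"].median())

# Joining: merge the two frames on their shared key.
df = left.merge(right, on="id")

# Aggregation: summarize age by outcome group.
print(df.groupby("outcome")["age"].mean())
```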

You'll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you're handling very large data sets in machine learning (which you often do).

What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You'll get a good overview of array-oriented programming with numpy, another important Python library often used in machine learning in concert with pandas, and some important techniques in using Jupyter Notebooks, the tool of choice for many data scientists.
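
A small taste of the array-oriented style numpy encourages, where whole-array expressions replace explicit Python loops:

```python
import numpy as np

# Array-oriented style: operate on whole arrays at once.
prices = np.array([10.0, 12.5, 9.9, 14.2])
quantities = np.array([3, 1, 4, 2])

# Elementwise multiply and reduce in vectorized expressions,
# with no explicit Python loop required.
revenue = prices * quantities
print(revenue.sum())        # total revenue
print(prices[prices > 10])  # boolean masking selects elements
```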

All this said, don't expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won't be any end-to-end examples where you'll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.

My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.

While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book we'll look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.
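
For a sense of how those concepts translate into code, here is a brief sketch using numpy and scipy (the book itself pairs its examples in Python and R; the numbers below are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=100, scale=15, size=200)

# Location and spread.
print(np.mean(sample), np.median(sample), np.std(sample, ddof=1))
print(np.percentile(sample, [25, 50, 75]))

# Significance testing: is the sample mean consistent with 100?
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(t_stat, p_value)

# Bootstrap resampling: estimate the sampling distribution of the mean.
boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(1000)]
print(np.percentile(boot_means, [2.5, 97.5]))  # 95% confidence interval
```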

But again, the key here is specialization.

On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than you'll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more suitable for data scientists than classic statistics textbooks. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields. Each topic is accompanied by a box that provides all the different synonyms for popular terms.

As you go deeper into the book, you'll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there's more focus on what's happening under the algorithm's hood rather than on using it for applications. But the authors have again made sure the chapters don't read like classic math textbooks and the formulas and equations are accompanied by nice coding examples.
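
The algorithms listed above all have standard scikit-learn implementations. The hedged sketch below fits them on synthetic data as an orientation aid; it does not reproduce the book's own examples.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Supervised methods covered in the book, fit and scored the same way.
for model in (LogisticRegression(max_iter=1000),
              KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))

# K-means clustering ignores the labels and groups points by distance.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)[:10])
```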

Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.

My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You'll probably revisit some of the chapters from time to time.

I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you'll appreciate the depth they add to your coding and math skills.

View post:
2 books to strengthen your command of python machine learning - TechTalks


What I Learned From Looking at 200 Machine Learning Tools – Machine Learning Times – machine learning & data science news – The Predictive…

Originally published in Chip Huyen Blog, June 22, 2020

To better understand the landscape of available tools for machine learning production, I decided to look up every AI/ML tool I could find. The resources I used include:

After filtering out applications companies (e.g. companies that use ML to provide business analytics), tools that aren't being actively developed, and tools that nobody uses, I got 202 tools. See the full list. Please let me know if there are tools you think I should include but aren't on the list yet!

Disclaimer

This post consists of 6 parts:

I. Overview
II. The landscape over time
III. The landscape is under-developed
IV. Problems facing MLOps
V. Open source and open-core
VI. Conclusion

I. OVERVIEW

One way to generalize the ML production flow that I agreed with consists of 4 steps:

I categorize the tools based on which step of the workflow they support. I don't include "Project setup" since it requires project management tools, not ML tools. This isn't always straightforward, since one tool might help with more than one step. Their ambiguous descriptions don't make it any easier: "we push the limits of data science," "transforming AI projects into real-world business outcomes," "allows data to move freely, like the air you breathe," and my personal favorite: "we lived and breathed data science."

I put the tools that cover more than one step of the pipeline into the category that they are best known for. If they're known for multiple categories, I put them in the All-in-one category. I also include an Infrastructure category for companies that provide infrastructure for training and storage. Most of these are cloud providers.


Go here to see the original:
What I Learned From Looking at 200 Machine Learning Tools - Machine Learning Times - machine learning & data science news - The Predictive...


Letters to the editor – The Economist

Jul 4th 2020

"Artificial intelligence" is an oxymoron ("Technology quarterly", June 13th). Intelligence is an attribute of living things, and can best be defined as the use of information to further survival and reproduction. When a computer resists being switched off, or a robot worries about the future for its children, then, and only then, may intelligence flow.

I acknowledge Richard Sutton's "bitter lesson", that attempts to build human understanding into computers rarely work, although there is nothing new here. I was aware of the folly of anthropomorphism as an AI researcher in the mid-1980s. We learned to fly when we stopped emulating birds and studied lift. Meaning and knowledge don't result from symbolic representation; they relate directly to the visceral motives of survival and reproduction.

Great strides have been made in widening the applicability of algorithms, but as Mr Sutton says, this progress has been fuelled by Moore's law. What we call AI is simply pattern discovery. Brilliant, transformative, and powerful, but just pattern discovery. Further progress is dependent on recognising this simple fact, and abandoning the fancy that intelligence can be disembodied from a living host.

ROB MACDONALD
Richmond, North Yorkshire

I agree that machine learning is overhyped. Indeed, your claim that such techniques are "loosely based on the structure of neurons in the brain" is true of neural networks, but these are just one type among a wide array of different machine-learning methods. In fact, machine learning in some cases is no more than a rebranding of existing processes. If by machine learning we simply mean building a model using large amounts of data, then good old ordinary least squares (line of best fit) is a form of machine learning.

TOM ARMSTRONG
Toronto
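
Mr Armstrong's point is easy to demonstrate: ordinary least squares "trains" parameters on data and then "predicts" on unseen inputs, just as any learned model does. A minimal sketch with numpy, on synthetic data:

```python
import numpy as np

# Synthetic data scattered around a known line.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# The closed-form "training" step: fit the line of best fit.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the true values 2.0 and 1.0

# The "prediction" step on unseen inputs, as any learned model would do.
print(slope * np.array([11.0, 12.0]) + intercept)
```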

The scope of your research into green investing was too narrow to condemn all financial services for their woolly thinking ("Hotting up", June 20th). You restricted your analysis to microeconomic factors and to the ability of investors to engage with companies. It overlooked the bigger picture: investors can also shape the macro environment by structured engagement with the system itself.

For example, the data you used largely originated from the investor-led Carbon Disclosure Project (for which we hosted the first ever meeting, nearly two decades ago). In addition, investors have also helped shape sustainable-finance plans in Britain, the EU and UN. Investors also sit on the industry-led Taskforce on Climate-related Financial Disclosure, convened by the Financial Stability Board, which has proved effective.

It is critical that governments apply a meaningful carbon price. But if we are to move money at the pace and scale required to deal with climate risk, governments need to reconsider the entire architecture of markets. This means focusing a wide-angled climate lens on prudential regulation, listing rules, accounting standards, investor disclosure standards, valuation conventions and stewardship codes, as well as building on new interpretations of legal fiduciary duty. This work is done most effectively in partnership with market participants. Green-thinking investors can help.

STEVE WAYGOOD
Chief responsible investment officer
Aviva Investors
London

Estimating indirectly observable GDP in real time is indeed a hard job for macro-econometricians, or "wonks", as you call us ("Crisis measures", May 30th). Most of the components are either highly lagged, as your article mentioned, or altogether unobservable. But the textbook definition of GDP and its components won't be changing any time soon, as the reader is led to believe. Instead what has always and will continue to change are the proxy indicators used to estimate the estimate of GDP.

MICHAEL BOERMAN
Washington, DC

Reading Lexington's account of his garden adventures (June 20th) brought back memories of my own experience with neighbours in Twinsburg, Ohio, in the late 1970s. They also objected to vegetables growing in our front yard (the only available space). We were doing it for the same reasons as Lexington: pleasure, fresh food to eat, and a learning experience for our young children. The neighbours, recently arrived into the suburban middle class, saw it as an affront. They no longer had to grow food for their table. They could buy it at the store and keep it in the deep freeze. Our garden, in their face every day, reminded them of their roots in Appalachian poverty. They called us hillbillies.

Arthur C. Clarke once wrote: "Any sufficiently advanced technology is indistinguishable from magic." Our version read, "Any sufficiently advanced lifestyle is indistinguishable from hillbillies."

PHILIP RAKITA
Philadelphia

Bartleby (May 30th) thinks the benefits of working from home will mean that employees will not want to return to the office. I am not sure that is the case for many people. My husband is lucky. He works for a company that already expected its staff to work remotely, so had the systems and habits in place. He has a spacious room to work in, with an adjustable chair, large monitor and a nice view. I do not work so he is not responsible for child care or home schooling.

Many people are working at makeshift workspaces which would make an occupational therapist cringe. Few will have a dedicated room for their home office, so their work invades their mental and physical space.

My husband has noticed that meetings are being set up both earlier and later in the day because there is an assumption that, as people are not commuting, it is fine to extend their work day. Colleagues book a half-hour meeting instead of dropping by someone's desk to ask a quick question. Any benefit of not commuting is lost. My husband still struggles to finish in time to have dinner with our children. People with especially long commutes now have more time, but even that was a change of scenery and offered some incidental exercise.

JENNIFER ALLEN
London

As Bartleby pointed out, the impact of pandemic working conditions won't be limited to the current generation. By exacerbating these divides, will covid-19 completely guarantee a future dominated by the baby-Zoomers?

MALCOLM BEGG
Tokyo

The transition away from the physical office engenders a lackadaisical approach to the work day for many workers. It brings to mind Ignatius Reilly's reasoning for his late start at the office from A Confederacy of Dunces:

"I avoid that bleak first hour of the working day during which my still sluggish senses and body make every chore a penance. I find that in arriving later, the work which I do perform is of a much higher quality."

ROBERT MOGIELNICKI
Arlington, Virginia

This article appeared in the Letters section of the print edition under the headline "On artificial intelligence, green investing, GDP, gardens, working from home"

Original post:
Letters to the editor - The Economist


Machine learning finds use in creating sharper maps of ‘ecosystem’ lines in the ocean – Firstpost

EOS | Jul 01, 2020 14:54:08 IST

On land, it's easy for us to see divisions between ecosystems: A rain forest's fan palms and vines stand in stark relief to the cacti of a high desert. Without detailed data or scientific measurements, we can tell a distinct difference in the ecosystems' flora and fauna.

But how do scientists draw those divisions in the ocean? A new paper proposes a tool to redraw the lines that define an ocean's ecosystems, lines originally penned by the seagoing oceanographer Alan Longhurst in the 1990s. The paper uses unsupervised learning, a machine learning method, to analyze the complex interplay between plankton species and nutrient fluxes. As a result, the tool could give researchers a more flexible definition of ecosystem regions.

Using the tool on global modeling output suggests that the ocean's surface has more than 100 different regions, or as few as 12 if aggregated, simplifying the 56 Longhurst regions. The research could complement ongoing efforts to improve fisheries management and satellite detection of shifting plankton under climate change. It could also direct researchers to more precise locations for field sampling.

A sea turtle in the aqua blue waters of Hawaii. Image: Rohit Tandon/Unsplash

Coccolithophores, diatoms, zooplankton, and other planktonic life-forms float on much of the ocean's sunlit surface. Scientists monitor plankton with long-term sampling stations and peer at their colors by satellite from above, but they don't have detailed maps of where plankton lives worldwide.

Models help fill the gaps in scientists' knowledge, and the latest research relies on an ocean model to simulate where 51 types of plankton amass on the surface oceans worldwide. It then applies the new classification tool, called the systematic aggregated ecoprovince (SAGE) method, to discern where neighborhoods of like-minded plankton and nutrients appear.

SAGE relies, in part, on a type of machine learning algorithm called unsupervised learning. The algorithm's strength is that it searches for patterns unprompted by researchers.

To compare the tool to a simple example, if scientists told an algorithm to identify shapes in photographs like circles and squares, the researchers could supervise the process by telling the computer what a square and circle looked like before it began. But in unsupervised learning, the algorithm has no prior knowledge of shapes and will sift through many images to identify patterns of similar shapes itself.
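
For readers unfamiliar with the approach, here is a generic unsupervised-clustering sketch in that spirit. It uses k-means on synthetic stand-in data and is not the paper's SAGE method, which the article describes only in outline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for model output: rows are ocean grid cells,
# columns are abundances of a few plankton types (not real SAGE data).
rng = np.random.default_rng(0)
gyre = rng.normal([1.0, 0.2, 0.1], 0.1, size=(100, 3))   # low-biomass region
coast = rng.normal([0.2, 2.0, 1.5], 0.2, size=(100, 3))  # productive region
cells = np.vstack([gyre, coast])

# Unsupervised step: the algorithm is never told which cell belongs
# to which region; it groups cells by similarity alone.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
print(labels[:5], labels[-5:])  # the two regions emerge as distinct clusters
```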

Using an unsupervised approach gives SAGE the freedom to let patterns emerge that the scientists might not otherwise see.

"While my human eyes can't see these different regions that stand out, the machine can," said first author and physical oceanographer Maike Sonnewald at Princeton University. "And that's where the power of this method comes in." This method could be used more broadly by geoscientists in other fields to make sense of nonlinear data, said Sonnewald.

A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on how phytoplankton species interact with each other. Using this approach, researchers have determined that the ocean can be split into over 100 types of provinces, and 12 megaprovinces, that are distinct in their ecological makeup.

Applying SAGE to model data, the tool noted 115 distinct ecological provinces, which can then be boiled down into 12 overarching regions.

One region appears in the center of nutrient-poor ocean gyres, whereas other regions show productive ecosystems along the coast and equator.

"You have regions that are kind of like the regions you'd see on land," Sonnewald said. "One area in the heart of a desert-like region of the ocean is characterized by very small cells. There's just not a lot of plankton biomass. The region that includes Peru's fertile coast, however, has a huge amount of stuff."

If scientists want more distinctions between communities, they can adjust the tool to see the full 115 regions. But having only 12 regions can be powerful too, said Sonnewald, because "it demonstrates the similarities between the different [ocean] basins." The tool was published in a recent paper in the journal Science Advances.

Oceanographer Francois Ribalet at the University of Washington, who was not involved in the study, hopes to apply the tool to field data when he takes measurements on research cruises. He said identifying unique provinces gives scientists a hint of how ecosystems could react to changing ocean conditions.

"If we identify that an organism is very sensitive to temperature, so then we can start to actually make some predictions," Ribalet said. Using the tool will help him tease out an ecosystem's key drivers and how it may react to future ocean warming.

Jenessa Duncombe. Text © 2020. AGU.

This story has been republished from Eos under the Creative Commons 3.0 license. Read the original story.


Read more:
Machine learning finds use in creating sharper maps of 'ecosystem' lines in the ocean - Firstpost
