Search Immortality Topics:



A Giant Star Appears to Have Winked Out of Existence – Futurism

Posted: July 3, 2020 at 3:48 am

Misplaced

A gigantic, particularly-bright star just disappeared without a trace.

It's an unusual case: the giant star in the nearby Kinman dwarf galaxy, which was 2.5 million times brighter than our Sun, had been tracked across a decade's worth of observations, Gizmodo reports. But when astronomers went to check up on it in 2019, the star had vanished.

Thankfully, the Trinity College Dublin astronomers have some ideas for what could have happened.

There are two possibilities that the team threw around in their research, published Tuesday in the journal Monthly Notices of the Royal Astronomical Society. First, the star could have drastically decreased in brightness and may also be obscured by a cloud of dust. The second is far more mysterious: it could have died and collapsed into a black hole without ever exploding in a supernova.

If that's the case, it would be the second-ever failed supernova that we know of.

Regardless of which scenario occurred (both are consistent with past observations of the star and with computer models), Gizmodo reports that the scientists missed out on an opportunity to update the models they'd based on the star. But on the flip side, the opportunity to investigate a vanished star makes up for the loss.

"We were all pleasantly surprised to find that the star's signature was not present in our first observation," lead researcher Andrew Allan told Gizmodo. "We initially hoped for a higher-resolution observation that resembled the past observations, which we would use for our models."

READ MORE: A Massive Star Has Disappeared Without a Trace [Gizmodo]

More on stars: Two Dead Stars Are Orbiting Each Other's Corpses Incredibly Fast

Link:
A Giant Star Appears to Have Winked Out of Existence - Futurism

Recommendation and review posted by G. Smith

Peer gallery Hoxton and The Estorick Collection Canonbury reopen – Islington Gazette

Posted: July 3, 2020 at 3:48 am

PUBLISHED: 11:26 02 July 2020 | UPDATED: 11:26 02 July 2020

Bridget Galton

Works by futurist Tullio Crali will be on display when the Estorick Collection in Canonbury reopens this month

Archant

Futurist Tullio Crali's stunning aeropaintings and Alex Urie's textured canvases offer a feast for the eyes of art-lovers emerging from lockdown


If you've yearned to see art in the flesh during lockdown then the reopening of two local galleries will offer a feast for the eyes.

The Estorick Collection of Modern Italian Art in Canonbury opens its doors again on July 15 after a four-month closure.

Admission is restricted to eight pre-booked ticket holders in timed slots who can enjoy the permanent collection, which includes a core of Futurist works.

Indeed, Tullio Crali: A Futurist Life, which opened in January, has now been extended to August 30.

Entry will be contactless, with floor arrows guiding visitors around the one-way system and continuous cleaning throughout the day.

A Futurist Life features more than 60 rarely seen pieces from Crali's family collection, dating from the 1920s to the 1980s.

Born in 1910, Crali is known for powerful imagery inspired by the modern world, technology and the machine. Although he experimented with fashion, theatre, and graphic design, he found his niche in the Futurist inter-war genre of aeropainting which aimed to show the vertiginous, topsy-turvy experience of flying.

Several of his figurative, abstract works in the exhibition capture what he called "the immense visual and sensory drama of flight".

In the 1940s, he accompanied pilots on combat and reconnaissance missions, creating work for official art programmes under Mussolini's regime, for which he was criticized. But in 1945, while he was running avant-garde cultural events in Gorizia, the occupying Nazi authorities identified him as subversive and earmarked him for deportation. Tipped off by a friend, he was then imprisoned by Tito's militia before being liberated by American troops.

Small wonder that he moved to Piedmont and took up painting works inspired by nature that adhered to Futurist aesthetics.

He said: "My art changed form, but not substance. A lack of faith in mankind leads me to turn my attention to nature. I search out serenity in everything; I try to discover the movements of nature and to express its vitality. It is the Futurist principle of universal dynamism that is striving to take form."

estorickcollection.com

Meanwhile, Peer in Hoxton reopens on July 16 with a show by British painter Alex Urie.

There's no need to book but only three people will be allowed in the gallery at a time, children must be supervised, and visitors should wear face coverings and use hand sanitiser.

Urie's Silo runs until August 29 and features large-scale paintings created by brushing, pouring, and flooding tinted household paint on to untreated canvas, linen or jute. Working on the reverse side of the artwork, he then flips it over to reveal a residual, sometimes unforeseen image.

Urie works on both sides while the paint is still wet, building layers of colour and texture and adding graphic elements including personal photographs and random online imagery including from Trip Advisor and YouTube.

He says: "I approach these canvases like salvaged grounds... Although the work is process-driven, I continue to question how these paintings might function like errant narrative paintings, how they are tied to location, or might begin to be quite instructional or diagrammatic."

Peer is open Wednesday to Saturday, 12-6, at 97 and 99 Hoxton Street, N1. peeruk.org


See the original post here:
Peer gallery Hoxton and The Estorick Collection Canonbury reopen - Islington Gazette

Recommendation and review posted by G. Smith

Climate Change Threatens 60 Percent of the World’s Fish Species – Futurism

Posted: July 3, 2020 at 3:48 am

Slow Boil

New research suggests that climate change threatens to wipe out significantly more species of fish than previously thought.

If average global temperatures rise by five degrees Celsius (a global warming nightmare scenario), New Scientist reports that 60 percent of all fish species could go extinct by the year 2100. It's grim news, as previous studies predicted that fish would be far more resilient.

Other research gauging the impact of rising water temperatures on fish populations focused exclusively on how well adult fish would be able to adapt. Based on those measurements alone, New Scientist reports that scientists expected only five percent of fish species to die off under the same conditions.

But the new study also takes fish larvae, embryos, and other stages in the fish life cycle into account. And in those phases, the fish are far more vulnerable to higher temperatures.

"This is casting light on a life phase that has been largely ignored," Hans-Otto Pörtner of the Alfred Wegener Institute told New Scientist.

Thankfully, the year 2100 is still pretty far away, and ambitious efforts to limit climate change could mean many of those species are spared.

"We can say 1.5 [degrees] is not paradise, there will be changes," Pörtner told New Scientist. "But we can limit those changes if we manage to stop climate change. Fish are so important for human nutrition, so this study makes a strong case for protecting our ecosystems and natural environments."

READ MORE: Climate change will make world too hot for 60 per cent of fish species [New Scientist]

More on extinction: Scientists: Human Extinction Is Extremely Likely

Read the original post:
Climate Change Threatens 60 Percent of the World's Fish Species - Futurism

Recommendation and review posted by G. Smith

Deep learning’s role in the evolution of machine learning – TechTarget

Posted: July 2, 2020 at 1:53 pm

Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.

While many of the machine learning algorithms developed over the decades are still in use today, deep learning -- a form of machine learning based on multilayered neural networks -- catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.

Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.

The story of machine learning starts in 1943 when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the ground for a new field that involved neural networks, automata theory and symbolic reasoning.

The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name artificial intelligence to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.

Early neural networks were not particularly useful -- nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.
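To make that limitation concrete, here is a minimal sketch (my own illustration, not drawn from the article, using toy AND/XOR data) of a single-layer perceptron: it converges on the linearly separable AND function but can never fully fit XOR, the classic counterexample Minsky and Papert emphasized.

```python
# Hypothetical toy example: a single-layer perceptron on AND vs. XOR.
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)
            w += update * xi          # adjust weights toward the target
            b += update
    return w, b

def accuracy(X, y, w, b):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

w, b = train_perceptron(X, y_and)
print("AND accuracy:", accuracy(X, y_and, w, b))   # should reach 1.0

w, b = train_perceptron(X, y_xor)
print("XOR accuracy:", accuracy(X, y_xor, w, b))   # never reaches 1.0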

"There was a massive focus on symbolic systems through the '70s, perhaps because of the idea that perceptrons were limited in what they could learn," said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery's special interest group on AI.

The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.

"That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s," said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.

In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.

The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.

"This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets," Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.

Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.

"Through the '90s and '00s, a joke used to be that 'neural networks are the second-best learning algorithm for any problem,'" Washington University's Das said.

Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.

Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.

Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.

"Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms," Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.

Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.

By 2012, deep learning attained star status after Hinton's team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.

"The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work," Das said.

Even when deep neural networks are not used directly, they indirectly drove -- and continue to drive -- fundamental changes in the field of machine learning, including the following:

Deep learning's predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI.

In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.

The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.

"Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.

The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.

Deep learning is also helping data scientists solve problems with smaller data sets and to solve problems in cases where the data has not been labeled.

"One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning," said Juan Jos Lpez Murphy, AI and big data tech director lead at Globant, an IT consultancy.

These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.
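As a hedged sketch of that baseline approach (assuming TensorFlow/Keras is available; the 160x160 input size and the five-class head are hypothetical), an ImageNet-pretrained backbone can be frozen and topped with a small task-specific layer:

```python
# Minimal transfer-learning sketch: reuse a pretrained feature extractor.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # hypothetical 5-class head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(small_labeled_dataset, epochs=5)  # only the new head is trained
```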

There are specific fields where deep learning provides a lot of value: image, speech and natural language processing, for example, as well as time series forecasting.

"The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning's] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders," said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.

For example, deep learning can find more efficient ways to autoencode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.
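One common way to obtain such word vectors (a minimal sketch with a hypothetical toy corpus, using gensim's word2vec rather than any method cited in the article) looks like this:

```python
# Toy example: learn word vectors whose geometry reflects word similarity.
from gensim.models import Word2Vec

corpus = [
    ["deep", "learning", "uses", "neural", "networks"],
    ["machine", "learning", "learns", "from", "data"],
    ["neural", "networks", "learn", "from", "data"],
]
model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, epochs=200)

vec = model.wv["learning"]                        # dense vector for a word
print(model.wv.similarity("neural", "networks"))  # cosine similarity score
```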

More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.

"This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people's lives," Taylor said.

Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.

"Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy," said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.

Another factor in democratizing access to machine learning tools has been the rise of Python.

"The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production," Globant's Lpez Murphy said.

Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.

Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. "Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists," Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.

For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.

Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.

"Machine learning provides more innovative applications for end users, but unless we're choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence," said Justin Richie, director of data science at Nerdery, an IT consultancy.

"It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent," Richie said.

Originally posted here:
Deep learning's role in the evolution of machine learning - TechTarget

Recommendation and review posted by Ashlie Lopez

My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work – TechRepublic

Posted: July 2, 2020 at 1:53 pm

Align Technology uses DevSecOps tactics to keep complex projects on track and align business and IT goals.

Image: AndreyPopov/Getty Images/iStockphoto

Align Technology's Chief Digital Officer Sreelakshmi Kolli is using machine learning and DevOps tactics to power the company's digital transformation.

Kolli led the cross-functional team that developed the latest version of the company's My Invisalign app. The app combines several technologies into one product including virtual reality, facial recognition, and machine learning. Kolli said that using a DevOps approach helped to keep this complex work on track.

"The feasibility and proof of concept phase gives us an understanding of how the technology drives revenue and/or customer experience," she said. "Modular architecture and microservices allows incremental feature delivery that reduces risk and allows for continuous delivery of innovation."

SEE: Research: Microservices bring faster application delivery and greater flexibility to enterprises (TechRepublic Premium)

The customer-facing app accomplishes several goals at once, the company said.

More than 7.5 million people have used the clear plastic molds to straighten their teeth, the company said. Align Technology has used data from these patients to train a machine learning algorithm that powers the visualization feature in the mobile app. The SmileView feature uses machine learning to predict what a person's smile will look like when the braces come off.

Kolli started with Align Technology as a software engineer in 2003. Now she leads an integrated software engineering group focused on product technology strategy and development of global consumer, customer and enterprise applications and infrastructure. This includes end user and cloud computing, voice and data networks and storage. She also led the company's global business transformation initiative to deliver platforms to support customer experience and to simplify business processes.

Kolli used the development process of the My Invisalign app as an opportunity to move the dev team to DevSecOps practices. Kolli said that this shift represents a cultural change, and making the transition requires a common understanding among all teams on what the approach means to the engineering lifecycle.

"Teams can make small incremental changes to get on the DevSecOps journey (instead of a large transformation initiative)," she said. "Investing in automation is also a must for continuous integration, continuous testing, continuous code analysis and vulnerability scans." To build the machine learning expertise required to improve and support the My Invisalign app, she has hired team members with that skill set and built up expertise internally.

"We continue to integrate data science to all applications to deliver great visualization experiences and quality outcomes," she said.

Align Technology uses AWS to run its workloads.

In addition to keeping patients connected with orthodontists, the My Invisalign app is a marketing tool to convince families to opt for the transparent but expensive alternative to metal braces.

Kolli said that IT leaders should work closely with business leaders to make sure initiatives support business goals such as revenue growth, improved customer experience, or operational efficiencies, and modernize the IT operation as well.

"Making the line of connection between the technology tasks and agility to go to market helps build shared accountability to keep technical debt in control," she said.

Align Technology released the revamped app in late 2019. In May of this year, the company released a digital visualization tool for doctors that combines a photo of the patient's face with their 3D Invisalign treatment plan.

This ClinCheck "In-Face" Visualization is designed to help doctors manage patient treatment plans.

The visualization workflow combines three components of Align's digital treatment platform: Invisalign Photo Uploader for patient photos, the iTero intraoral scanner to capture data needed for the 3D model of the patient's teeth, and ClinCheck Pro 6.0. ClinCheck Pro 6.0 allows doctors to modify treatment plans through 3D controls.

"These new product releases are the first in a series of innovations to reimagine the digital treatment planning process for doctors," Raj Pudipeddi, Align's chief innovation, product, and marketing officer and senior vice president, said in a press release about the product.


Read more from the original source:
My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work - TechRepublic

Recommendation and review posted by Ashlie Lopez

2 books to strengthen your command of python machine learning – TechTalks

Posted: July 2, 2020 at 1:53 pm

Image credit: Depositphotos

This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It's not. And every single one of them has been dismayed.

I see two reasons for why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often use when speaking of the ML trade.

The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don't want to vilify any of those books and courses. I've reviewed several of them (and will review some more in the coming weeks), and I think they're invaluable sources for becoming a good machine learning developer.

But they're not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you're doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can't escape the math.

And all of that can't be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth's famous The Art of Computer Programming series.

So, what is all this tirade for? In my exploration of data science and machine learning, I'm always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.

In this post, I'll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.

Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python involves loading and preprocessing data in pandas before feeding it to your models.

Most books and courses on machine learning provide an introduction to the main pandas components such as DataFrames and Series and some of the key functions such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter's worth of code samples in most books.

In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You'll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.
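For readers who have only seen the basics, a minimal sketch of that cleaning-and-joining workflow (the file names and columns here are hypothetical, not examples from the book) might look like this:

```python
# Hypothetical example: clean two CSVs, join them, and aggregate by month.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("customers.csv")

orders = orders.dropna(subset=["customer_id"])       # drop rows missing the join key
orders["amount"] = orders["amount"].fillna(0.0)       # impute missing amounts

merged = orders.merge(customers, on="customer_id", how="left")  # join the tables
monthly = (merged
           .groupby([pd.Grouper(key="order_date", freq="M"), "region"])["amount"]
           .sum()
           .reset_index())
print(monthly.head())
```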

You'll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you're handling very large data sets in machine learning (which you often do).

What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You'll get a good overview of array-oriented programming with numpy, another important Python library often used in machine learning in concert with pandas, and some important techniques in using Jupyter Notebooks, the tool of choice for many data scientists.
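A minimal taste of that array-oriented style (my own sketch, not an excerpt from the book): whole-array operations and broadcasting replace explicit Python loops.

```python
# Vectorized numpy operations instead of element-by-element loops.
import numpy as np

prices = np.array([10.0, 10.5, 9.8, 11.2, 12.0])
returns = np.diff(prices) / prices[:-1]        # elementwise, no explicit loop

matrix = np.random.default_rng(0).normal(size=(1000, 3))
centered = matrix - matrix.mean(axis=0)        # broadcasting across rows
cov = centered.T @ centered / (len(matrix) - 1)
print(returns, cov.shape)
```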

All this said, don't expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won't be any end-to-end examples where you'll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.

My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.

While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book we'll look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.
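As a quick illustration of those opening topics (a sketch with synthetic data, not taken from the book), numpy and scipy cover the descriptive measures and a basic significance test:

```python
# Descriptive statistics and a one-sample t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=200)

print(np.mean(sample), np.median(sample))      # central tendency
print(np.std(sample, ddof=1))                  # sample standard deviation
print(np.percentile(sample, [25, 50, 75]))     # quartiles

# significance testing: is the sample mean different from 95?
t_stat, p_value = stats.ttest_1samp(sample, popmean=95)
print(t_stat, p_value)
```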

But again, the key here is specialization.

On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than you'll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more suitable than classic statistics textbooks. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields. Each topic is accompanied by a box that provides all the different synonyms for popular terms.

As you go deeper into the book, you'll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there's more focus on what's happening under the algorithm's hood than on using it for applications. But the authors have again made sure the chapters don't read like classic math textbooks, and the formulas and equations are accompanied by nice coding examples.
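For orientation, here is a hedged scikit-learn sketch (synthetic data, near-default settings, my own example) that exercises the same families of algorithms the book dissects; the book itself focuses on the math underneath rather than on library calls like these.

```python
# Quick pass over logistic regression, k-NN, random forests and k-means.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))  # test accuracy

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])  # unsupervised cluster labels
```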

Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.

My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You'll probably revisit some of the chapters from time to time.

I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you'll appreciate the depth they add to your coding and math skills.

View post:
2 books to strengthen your command of python machine learning - TechTalks

Recommendation and review posted by Ashlie Lopez

