

Why Canada needs a national policy for Black arts, culture and heritage – Policy Options

Posted: February 18, 2020 at 5:48 pm

Like the ones before it, this Black History Month is blessed with a cascade of creative programming that will uncover and convey Black Canada's complex and compelling stories through an array of artistic mediums. This includes varied and powerful artistic performances of theatre, music and dance; photography and other visual arts exhibitions; book talks; community tours; film screenings, and so much more.

However, the troubling truth is that, outside of February, consistent and prominent displays of Black creative talent and artistic direction are exceedingly rare in Canada. Beyond Black History Month, Canada's Black creatives and creative industry professionals experience what one of Canada's leading Black professors, Katherine McKittrick, might refer to as an "absented presence." This absenting of Canada's Black creatives is especially revealed in the leadership and programming of Canada's dominant cultural institutions, including major galleries, museums, art, film and performance spaces. This is why Canada needs a national policy on Black arts, culture and heritage.

Towards a national arts policy for Black Canadians

A national arts policy for Black Canadians would enable Canadian governments to fulfill the legislated promise of the Canadian Multiculturalism Act. This Act recognizes multiculturalism as a fundamental characteristic of Canadian society. A proposed Black national arts policy, then, would leverage the diverse and dynamic profiles of Canada's Black communities to support our country's commitment to a policy of multiculturalism designed to "preserve and enhance the multicultural heritage of Canadians" while working to achieve "the equality of all Canadians in the economic, social, cultural and political life of Canada."

A Black Canadian national arts policy would also substantially enhance the principle of multiculturalism as a human rights instrument enshrined in Canada's Constitution in section 27 of the Canadian Charter of Rights and Freedoms. Given the typical absence and erasure of Black arts, culture and heritage in Canada, protecting the preservation and enhancement of the multicultural heritage of Canadians of African descent through a national Black arts, culture and heritage policy is a prudent policy intervention whose value transcends party lines.

Because of the aforementioned legal and constitutional provisions, Canadians and parties of all political stripes have a vested national interest in ensuring due respect and presence are afforded to Canada's Black communities through arts, culture and heritage place-making. More specifically, the current government also has an interest in adopting a national Black arts policy because it would markedly enhance Canada's commitment to implement the United Nations International Decade for People of African Descent.

Black Canada's got tremendous talent

For decades, and particularly in the last couple of years, the artistic excellence of Canada's Black creative talents has abundantly demonstrated that now is the time for Canada's adoption of a national policy for Black arts, culture and heritage.

Consider, for instance, some of the most recent Black Canadian successes in the literary arts alone:

This is to say nothing of Canada's longtime literary treasures Dionne Brand, André Alexis, Esi Edugyan, Lawrence Hill, Dany Laferrière, M. NourbeSe Philip, George Elliott Clarke, the late Austin Clarke, and many more. There's also a coming tide of gifted breakout writers who are poised to soon follow in these writers' footsteps, including Eternity Martis, Zalika Reid-Benta, Kagiso Lesego Molope, Chelene Knight, Desmond Cole, Téa Mutonji, Rebecca Fisseha, Nadia Hohn, Evan Winter, Whitney French, Djamila Ibrahim and Canisia Lubrin.

In music, Black Canada's creative genius is also gaining increasing traction beyond the superstars Drake (including his OVO Sound mega artists and producers) and The Weeknd. For instance, in 2019, the Polaris Music Prize went to rapper Haviah Mighty for her album 13th Floor. Karena Evans is also making her mark as one of the hottest new award-winning video directors. There's also the increasing embrace by the global hip-hop community of Juno Award-winning artist Shad as a trusted and true hip-hop historian, thanks to the ballooning success of the Canadian music documentary series Hip-Hop Evolution on Netflix.

In Hollywood, actor Stephan James and his brother, Shamier Anderson, are doing bigger and bigger things in front of the camera, while breakout film director and screenwriter Stella Meghie's filmmaking career has taken off in the US and Canada; her highly anticipated film The Photograph arrives in theatres this month. Also, actress Vinessa Antoine recently came to national attention as the lead character in Diggstown, the first Canadian drama series to feature a Black Canadian woman as its lead, produced by fellow Black Canadian Floyd Kane. Finally, there is the growing fame of Winnie Harlow, who continues to change the game as a global fashion model and a public spokesperson with lived experience of the skin condition vitiligo.

These are some of the most prominent Black Canadian creatives recently achieving great successes. They're doing so in a way that is defining and refining what it means to be not just Black, but Black and Canadian.

Valuing Black arts is valuing Black people

Without a national policy, infrastructure and strategy to support, sustain and nurture their creative and professional growth, the hundreds of thousands of young Black Canadians inspired by the above-mentioned successes are left without much-needed support to pursue their own creative dreams. This policy gap contributes to the erasure of Black people from Canada's collective consciousness.

This experience of Black Canadian erasure is captured by Black Canadian historian Cecil Foster, who has said: "In Canada, the norm has always been to either place blackness on the periphery of society by strategically and selectively celebrating Blacks only as a sign of how tolerant and non-racist white Canadians are (as is seen in the recurrence of the Underground Railroad as a positive achievement in a Canadian mythology of racial tolerance) or to erase blackness as an enduring way of life from the national imaginary."

Canadian policymakers must realize that how Canada treats its Black creatives is an extension of how Canada's Black communities are treated by Canadian society writ large. This connection is captured by a poignant comment made by Toronto hip-hop intellectual Ian Kamau, who has said, "Black music and Black art, like Black people, are undervalued in Canada."

This undervaluing of Black Canadian voices brings a sense of perpetual social and civic disposability to the Black experience in Canada that can feel suffocating. This undervaluing tends to make being Black in Canada feel like Blackness is only something to be put on display for temporary and specific purposes. It's important that Canada boldly demonstrate that our country finds worth, value and meaning in Black Canadian life well beyond the short and cold days of February. We need to build on the good that comes out of Black History Month.

Black arts, well-being and belonging

Without a long-term, robustly resourced, multi-sectoral and intergovernmental national policy for Black arts, culture and heritage, Canada risks turning celebration into exploitation of Canada's Black creative class (and by extension, of Canada's Black communities). Not having a national framework for birthing, incubating and nurturing Canada's Black talents is a lost opportunity for all Canadians. This is because such a policy would only advance the currency of Canada's global cultural capital.

Finally, while many Black communities love Black History Month, it is also true that for many Black Canadians, it perpetuates a sense of Black disposability. It stands in stark contrast to the almost complete lack of positive time and attention that Canada's Black communities are given by governments and mainstream institutions the rest of the year.

A national Black arts, culture and heritage policy would help Black History Month to enhance its commemoration of Canada's Black histories while also serving as a vehicle for an annual launch and exhibition of a year-long display of Black Canada's diverse established and emerging talents. This would go a long way to not only fostering a deeper sense of belonging for Black Canadians (new and old) but also materially advancing the economic well-being of the Black creatives and administrators who too often struggle to support themselves and their art the rest of the year.

The Swahili word for creativity is kuumba, which has become a principle of Kwanzaa, the African diaspora's cultural celebration. It's time for an African Canadian Arts Council, and we could call it Kuumba Canada. Because our #BlackArtsMatter.

Photo: Canadian broadcaster and writer Amanda Parris in Toronto at the 2018 Canadian Screen Awards. Last year, she won the Governor General's Literary Award for Drama. Shutterstock, by Shawn Goldberg.


Here is the original post:
Why Canada needs a national policy for Black arts, culture and heritage - Policy Options

Recommendation and review posted by G. Smith

Machine learning and clinical insights: building the best model – Healthcare IT News

Posted: February 18, 2020 at 5:46 pm

At HIMSS20 next month, two machine learning experts will show how machine learning algorithms are evolving to handle complex physiological data and drive more detailed clinical insights.

During surgery and other critical care procedures, continuous monitoring of blood pressure to detect and avoid the onset of arterial hypotension is crucial. New machine learning technology developed by Edwards Lifesciences has proven to be an effective means of doing this.

In the prodromal stage of hemodynamic instability, which is characterized by subtle, complex changes in different physiologic variables, unique dynamic arterial waveform "signatures" are formed; detecting them requires machine learning and complex feature-extraction techniques.

Feras Hatib, director of research and development for algorithms and signal processing at Edwards Lifesciences, explained that his team developed a technology that could predict, in real time and continuously, upcoming hypotension in acute-care patients, using arterial pressure waveforms.

"We used an arterial pressure signal to create hemodynamic features from that waveform, and we try to assess the state of the patient by analyzing those signals," said Hatib, who is scheduled to speak about his work at HIMSS20.

His team's success offers real-world evidence as to how advanced analytics can be used to inform clinical practice by training and validating machine learning algorithms using complex physiological data.

Machine learning approaches were applied to arterial waveforms to develop an algorithm that observes subtle signs to predict hypotension episodes.

In addition, real-world evidence and advanced data analytics were leveraged to quantify the association between hypotension exposure duration for various thresholds and critically ill sepsis patient morbidity and mortality outcomes.
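
The article doesn't spell out the pipeline, but the general pattern Hatib describes (windowing the arterial pressure signal, extracting hemodynamic features, and training a classifier to flag imminent hypotension) can be sketched as follows. Every feature, threshold and file name here is an illustrative assumption, not Edwards Lifesciences' implementation:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def waveform_features(window):
    # Summarize one window of arterial-pressure samples (mmHg).
    return [
        window.mean(),                 # mean arterial pressure proxy
        window.std(),                  # beat-to-beat variability
        window.max() - window.min(),   # pulse-pressure proxy
        np.percentile(window, 10),     # low-pressure tail
    ]

def make_dataset(wave, fs, window_s=20, horizon_s=300, map_thresh=65):
    # Label a window 1 if mean pressure over the next horizon_s
    # seconds falls below map_thresh (a crude hypotension proxy).
    w, h = int(window_s * fs), int(horizon_s * fs)
    X, y = [], []
    for start in range(0, len(wave) - w - h, w):
        X.append(waveform_features(wave[start:start + w]))
        y.append(int(wave[start + w:start + w + h].mean() < map_thresh))
    return np.array(X), np.array(y)

fs = 100  # samples per second from the arterial line
wave = np.load("arterial_pressure_train.npy")  # hypothetical recording
X, y = make_dataset(wave, fs)
model = GradientBoostingClassifier().fit(X, y)

At inference time the same feature extraction would run on each live window, with the classifier's probability serving as the continuous risk signal.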

"This technology has been in Europe for at least three years, and it has been used on thousands of patients, and has been available in the US for about a year now," he noted.

Hatib noted similar machine learning models could provide physicians and specialists with information that will help prevent re-admissions, inform other treatment options, or help prevent conditions like delirium, all current areas of active development.

"In addition to blood pressure, machine learning could find a great use in the ICU, in predicting sepsis, which is critical for patient survival," he noted. "Being able to process that data in the ICU or in the emergency department, that would be a critical area to use these machine learning analytics models."

Hatib pointed out that the way in which data is annotated, in his case, defining what is hypotension and what is not, is essential in building the machine learning model.

"The way you label the data, and what data you include in the training is critical," he said. "Even if you have thousands of patients and include the wrong data, that isnt going to help its a little bit of an art to finding the right data to put into the model."

On the clinical side, it's important to tell the clinician what the issue is, in this case, what is causing the hypotension.

"You need to provide to them the reasons that could be causing the hypertension this is why we complimented the technology with a secondary screen telling the clinician what is physiologically is causing hypertension," he explained. "Helping them decide what do to about it was a critical factor."

Hatib said in the future machine learning will be everywhere, because scientists and universities across the globe are hard at work developing machine learning models to predict clinical conditions.

"The next big step I see is going toward using this ML techniques where the machine takes care of the patient and the clinician is only an observer," he said.

Feras Hatib, along with Sibyl Munson of Boston Strategic Partners, will share some machine learning best practices at HIMSS20 in a session, "Building a Machine Learning Model to Drive Clinical Insights." It's scheduled for Wednesday, March 11, from 8:30-9:30 a.m. in room W304A.

Read more:
Machine learning and clinical insights: building the best model - Healthcare IT News

Recommendation and review posted by Ashlie Lopez

Machine Learning Is No Place To Move Fast And Break Things – Forbes

Posted: February 18, 2020 at 5:46 pm

It is much easier to apologize than it is to get permission.

jamesnoellert.com

The hacking culture has been the lifeblood of software engineering since long before the "move fast and break things" mantra became ubiquitous among tech startups [1, 2]. Computer industry leaders from Chris Lattner [3] to Bill Gates recount breaking and reassembling radios and other gadgets in their youth, ultimately being drawn to computers for their hackability. Silicon Valley itself may have never become the world's innovation hotbed if it were not for the hacker dojo started by Gordon French and Fred Moore, the Homebrew Computer Club.

Computer programmers still strive to move fast and iterate things, developing and deploying reliable, robust software by following industry-proven processes such as test-driven development and the Agile methodology. In a perfect world, programmers could follow these practices to the letter and ship pristine software. Yet time is money. Aggressive, business-driven deadlines pass before coders can properly finish developing software ahead of releases. Add to this the modern practices of rapid releases and hot-fixing (updating features on the fly [4]), and the bar for deployable software is even lower. A company like Apple even prides itself on releasing phone hardware with missing software features: the Deep Fusion image processing was part of an iOS update months after the newest iPhone was released [5].

Software delivery becoming faster is a sign of progress; software is still eating the world [6]. But it's also subject to abuse: Rapid software processes are used to ship fixes and complete new features, but are also used to ship incomplete software that will be fixed later. Tesla has emerged as a poster child with over-the-air updates that can improve driving performance and battery capacity, or hinder them by mistake [7]. Naive consumers laud Tesla for the tech-savvy, software-first approach they're bringing to the old-school automobile industry. Yet industry professionals criticize Tesla for their recklessness: A/B testing [8] an 1800 kg vehicle on the road is slightly riskier than experimenting with a new feature on Facebook.

Add Tesla Autopilot and machine learning algorithms into the mix, and this becomes significantly more problematic. Machine learning systems are by definition probabilistic and stochastic, predicting, reacting, and learning in a live environment, not to mention riddled with corner cases to test and vulnerabilities to unforeseen scenarios.

Massive progress in software systems has enabled engineers to move fast and iterate, for better or for worse. Now, with massive progress in machine learning systems (or Software 2.0 [9]), it's seamless for engineers to build and deploy decision-making systems that involve humans, machines, and the environment.

A current danger is that the toolset of the engineer is being made widely available, but the theoretical guarantees and the evolution of the right processes are not yet being deployed. So while deep learning has the appearance of an engineering profession, it is missing some of the theoretical checks, and practitioners run the risk of falling flat on their faces.

In his recent book Rebooting AI [10], Gary Marcus draws a thought-provoking analogy between deep learning and pharmacology: Deep learning models are more like drugs than traditional software systems. Biological systems are so complex that it is rare for the actions of medicine to be completely understood and predictable. Theories of how drugs work can be vague, and actionable results come from experimentation. While traditional software systems are deterministic and debuggable (and thus robust), drugs and deep learning models are developed via experimentation and deployed without fundamental understanding and guarantees. Too often the AI research process is to experiment first, then justify results. It should be hypothesis-driven, with scientific rigor and thorough testing processes.

What we're missing is an engineering discipline with principles of analysis and design.

Before there was civil engineering, there were buildings that fell to the ground in unforeseen ways. Without proven engineering practices for deep learning (and machine learning at large), we run the same risk.

Taking this to the extreme is not advised either. Consider the shift in spacecraft engineering over the last decade: Operational efficiencies and the move-fast culture have been essential to the success of SpaceX and other startups such as Astrobotic, Rocket Lab, Capella, and Planet. NASA cannot keep up with the pace of innovation; rather, it collaborates with and supports the space startup ecosystem. Nonetheless, machine learning engineers can learn a thing or two from an organization that has an incredible track record of deploying novel tech in massive coordination with human lives at stake.

Grace Hopper advocated for moving fast: That brings me to the most important piece of advice that I can give to all of you: if you've got a good idea, and it's a contribution, I want you to go ahead and DO IT. It is much easier to apologize than it is to get permission. Her motivations and intent hopefully have not been lost on engineers and scientists.

[1] Facebook Cofounder Mark Zuckerberg's "prime directive to his developers and team", from a 2009 interview with Business Insider, "Mark Zuckerberg On Innovation".

[2] xkcd

[3] Chris Lattner is the inventor of LLVM and Swift. Recently on the AI podcast, he and Lex Fridman had a phenomenal discussion:

[4] Hotfix: A software patch that is applied to a "hot" system; i.e., a fix to a deployed system already in use. These are typically issues that cannot wait for the next release cycle, so a hotfix is made quickly and outside normal development and testing processes.

[5]

[6]

[7]

[8] A/B testing is an experimental process to compare two or more variants of a product, intervention, etc. This is very common in software products when considering, e.g., the color of a button in an app.

[9] Software 2.0 was coined by renowned AI research engineer Andrej Karpathy, who is now the Director of AI at Tesla.

[10]

[11]

View original post here:
Machine Learning Is No Place To Move Fast And Break Things - Forbes

Recommendation and review posted by Ashlie Lopez

Deploying Machine Learning to Handle Influx of IoT Data – Analytics Insight

Posted: February 18, 2020 at 5:46 pm

The Internet of Things is gradually penetrating every aspect of our lives. With the growth in numbers of internet-connected sensors built into cars, planes, trains, and buildings, we can say it is everywhere. Be it smart thermostats or smart coffee makers, IoT devices are marching ahead into mainstream adoption.

But these devices are far from perfect. Currently, there is a lot of manual input required to achieve optimal functionality; there is not a lot of intelligence built in. You must set your alarm, tell your coffee maker when to start brewing, and manually set schedules for your thermostat, all independently and precisely.

These machines rarely communicate with each other, and you are left playing the role of master orchestrator, a labor-intensive job.

Every time IoT sensors gather data, there has to be someone at the backend to classify the data, process it and ensure information is sent back to the device for decision-making. If the data set is massive, how could an analyst handle the influx? Driverless cars, for instance, have to make rapid decisions when on autopilot, and relying on humans is completely out of the picture. Here, machine learning comes into play.

Tapping into that data to extract useful information is a challenge that's starting to be met using the pattern-matching abilities of machine learning. Firms are increasingly feeding data collected by Internet of Things (IoT) sensors, situated everywhere from farmers' fields to train tracks, into machine-learning models and using the resulting information to improve their business processes, products, and services.

In this regard, one of the most significant leaders is Siemens, whose Internet of Trains project has enabled it to move from simply selling trains and infrastructure to offering a guarantee its trains will arrive on time.

Through this project, the company has embedded sensors in trains and tracks in selected locations in Spain, Russia, and Thailand, and then used the data to train machine-learning models to spot tell-tale signs that tracks or trains may be failing. Having granular insights into which parts of the rail network are most likely to fail, and when, has allowed repairs to be targeted where they are most needed, a process called predictive maintenance. That, in turn, has allowed Siemens to start selling what it calls "outcome as a service": a guarantee that trains will arrive on time close to 100 percent of the time.

Thyssenkrupp, which runs 1.1 million elevators worldwide, is one of the earliest firms to pair IoT sensor data with machine learning models; it has been feeding data collected by internet-connected sensors throughout its elevators into trained machine-learning models for several years. Such models provide real-time updates on the status of elevators and predict which are likely to fail and when, allowing the company to target maintenance where it's needed, reducing elevator outages and saving money on unnecessary servicing. Similarly, Rolls-Royce collects more than 70 trillion data points from its engines, feeding that data into machine-learning systems that predict when maintenance is required.
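
Neither Siemens nor Thyssenkrupp publishes its pipeline, but the predictive-maintenance pattern these examples share (aggregate per-asset sensor readings into features, label failures, train a classifier, rank assets by risk) can be sketched roughly as follows; all file and column names are hypothetical:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per raw reading: asset_id, vibration, temperature, door_cycles.
readings = pd.read_csv("sensor_readings.csv")   # hypothetical export
failures = pd.read_csv("failure_log.csv")       # asset_id of failed units

# Aggregate each asset's trailing readings into summary features.
features = readings.groupby("asset_id").agg(
    vib_mean=("vibration", "mean"),
    vib_max=("vibration", "max"),
    temp_mean=("temperature", "mean"),
    cycles=("door_cycles", "sum"),
)

# Label: 1 if the asset appears in the failure log. (A real pipeline
# would align timestamps and predict failure within a set horizon.)
y = features.index.isin(failures["asset_id"]).astype(int)
model = RandomForestClassifier(n_estimators=200).fit(features, y)

# Rank assets by predicted risk so crews service the riskiest first.
risk = pd.Series(model.predict_proba(features)[:, 1], index=features.index)
print(risk.sort_values(ascending=False).head())

The ranking, not the raw prediction, is what turns the model into "targeted maintenance": limited repair crews are sent where the modeled risk is highest.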

In a recent report, IDC analysts Andrea Minonne, Marta Muñoz and Andrea Siviero say that applying artificial intelligence (the wider field of study that encompasses machine learning) to IoT data is already delivering proven benefits for firms.

"Given the huge amount of data IoT-connected devices collect and analyze, AI finds fertile ground across IoT deployments and use cases, taking analytics to a level that uncovers insights to help lower operational costs, provide better customer service and support, and create product and service innovation," they say.

According to IDC, the most common use cases for machine learning and IoT data will be predictive maintenance, followed by analyzing CCTV surveillance, smart home applications, in-store contextualized marketing and intelligent transportation systems.

That said, companies using AI and IoT today are outliers, with many firms neither collecting large amounts of data nor using it to train machine-learning models to extract useful information.

"We're definitely still in the very early stages," says Mark Hung, research VP at analyst firm Gartner.

"Historically, in a lot of these use cases, in the industrial space, smart cities, in agriculture, people have either not been gathering data or gathered a large trove of data and not really acted on it," Hung says. "It's only fairly recently that people understand the value of that data and are finding out what's the best way to extract that value."

The IDC analysts agree that most firms are yet to exploit IoT data using machine learning, pointing out that a large portion of IoT users are struggling to go beyond mere data collection due to a lack of analytics skills, security concerns, or simply because they don't have a forward-looking strategic vision.

Machine learning is currently so prominent because of advances over the past decade in the field of deep learning, a subset of ML. These breakthroughs were applied to areas from computer vision to speech and language recognition, allowing computers to see the world around them and understand human speech at a level of accuracy not previously possible.

Machine learning uses different approaches for harnessing trainable mathematical models to analyze data, and for all the headlines ML receives, it's also only one of many different methods available for interrogating data, and not necessarily the best option.

Dan Bieler, a principal analyst at Forrester, says: "We need to recognize that AI is currently being hyped quite a bit. You need to look very carefully at whether it'd generate the benefits you're looking for, whether it'd create the value that justifies the investment in machine learning."

Visit link:
Deploying Machine Learning to Handle Influx of IoT Data - Analytics Insight

Recommendation and review posted by Ashlie Lopez

ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified…

Posted: February 18, 2020 at 5:46 pm

Advances to ReversingLabs Titanium Platform Deliver Transparent and Trusted Malware Insights that Address Security Skills Gap

CAMBRIDGE, Mass., Feb. 18, 2020 (GLOBE NEWSWIRE) -- ReversingLabs, a leading provider of explainable threat intelligence solutions, today announced new and enhanced capabilities for its Titanium Platform, including new machine learning algorithm models, explainable classification and out-of-the-box security information and event management (SIEM) plug-ins, security orchestration, automation and response (SOAR) playbooks, and MITRE ATT&CK Framework support. Introducing a new level of threat intelligence, the Titanium Platform now delivers explainable insights and verification that better support humans in the incident response decision-making process. ReversingLabs has been named as an ML-Based Machine Learning Binary Analysis Sample Provider within Gartner's 2019 Emerging Technologies and Trends Impact Radar: Security [1]. ReversingLabs will showcase its new Titanium Platform at RSA 2020, February 24-28 in San Francisco, Moscone Center, Booth #3311 in the South Expo.

"As digital initiatives continue to gain momentum, companies are exposed to an increasing number of threat vectors fueled by a staggering volume of data that contains countless malware-infected files and objects, demanding new requirements from the IT teams that support them," said Mario Vuksan, CEO and Co-founder, ReversingLabs. "It's no wonder security operations teams struggle to manage incident response. Combine the complexity of threats with blind black box detection engine verdicts, and a lack of analyst experience, skill and time, and teams are crippled by their inability to effectively understand and take action against these increased risks. The current and future threat landscape requires a different approach to threat intelligence and detection that automates time-intensive threat research efforts with the level of detail analysts need to better understand events, improve productivity and refine their skills."

According to Gartner's Emerging Technologies and Trends Impact Radar: Security, ML-based file analysis has grown at 35 percent over the past year in security technology products, with endpoint products being first movers to adopt this new technology [2].

Black Box to Glass Box Verdicts

Because signature, AI and machine learning-based threat classifications from black box detection engines come with little to no context, security analysts are left in the dark as to why a verdict was determined, negatively impacting their ability to verify threats, take informed action and extend critical job skills. That lack of context and transparency propelled ReversingLabs to develop a new "glass box" approach to threat intelligence and detection designed to better inform human understanding first. Security operations teams using ReversingLabs Titanium Platform with patent-pending Explainable Machine Learning can automatically inspect, unpack, and classify threats as before, but with the added capability of verifying these threats in context with transparent, easy-to-understand results. By applying new machine learning algorithms to identify threat indicators, ReversingLabs enables security teams to more quickly and accurately identify and classify unknown threats.
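
The release doesn't disclose how ReversingLabs computes its indicators, but the general "glass box" idea, returning a verdict together with the human-readable indicators that drove it, can be illustrated with a toy sketch; the features, training data and attribution method below are hypothetical stand-ins, not ReversingLabs' implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = [
    "section_entropy",      # packed or encrypted payloads score high
    "imports_crypto_api",   # suspicious use of encryption APIs
    "writes_autorun_key",   # persistence indicator
    "valid_signature",      # legitimate signing lowers suspicion
]

rng = np.random.default_rng(0)
X_train = rng.random((500, len(FEATURES)))                   # stand-in data
y_train = (X_train[:, 0] + X_train[:, 2] > 1.2).astype(int)  # toy labels
model = RandomForestClassifier().fit(X_train, y_train)

def explain_verdict(sample, top_k=3):
    verdict = "malicious" if model.predict([sample])[0] else "benign"
    # Weight global feature importances by this sample's values, a
    # crude, readable proxy for per-sample attribution.
    contrib = model.feature_importances_ * np.asarray(sample)
    top = sorted(zip(FEATURES, contrib), key=lambda t: -t[1])[:top_k]
    reasons = ", ".join(f"{name} ({score:.2f})" for name, score in top)
    return f"verdict: {verdict}; top indicators: {reasons}"

print(explain_verdict([0.9, 0.2, 0.8, 0.1]))

The shape of the output is the point: an analyst sees not just "malicious" but which indicators pushed the verdict, which is the difference between a black box and a glass box result.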

Key Features

Available now with Explainable Machine Learning, ReversingLabs' platform inspires confidence in threat detection verdicts amongst security operations teams through a transparent and context-aware diagnosis, automating manual threat research with results humans can interpret to take informed action on zero-day threats, while simultaneously fueling continuous education and the upskilling of analysts. ReversingLabs' Explainable Machine Learning is based on machine learning-based binary file analysis, providing high-speed analysis, feature extraction and classification that can be used to enhance telemetry provided to incident response analysts. Key features of ReversingLabs' updated platform include:

"Effective machine learning results depend on having the right volume, structure, and quality of data to convert information into a relevant finding," said Vijay Doradla, Chief Business Officer at SparkCognition. "With access to ReversingLabs' extensive cloud repository, we have the breadth, depth, and scale of data necessary to train our machine learning models. Accurate classification and detection of threats fuels the machine learning-driven predictive security model leveraged in our DeepArmor next-generation endpoint protection platform."

[1, 2] Gartner, Emerging Technologies and Trends Impact Radar: Security, Lawrence Pingree, et al., 13 November 2019

About ReversingLabs

ReversingLabs helps Security Operations Center (SOC) teams identify, detect and respond to the latest attacks, advanced persistent threats and polymorphic malware by providing explainable threat intelligence into destructive files and objects. ReversingLabs technology is used by the world's most advanced security vendors and deployed across all industries searching for a better way to get at the root of the web, mobile, email, cloud, app development and supply chain threat problem, of which files and objects have become major risk contributors.

ReversingLabs Titanium Platform provides broad integration support with more than 4,000 unique file and object formats, and speeds detection of malicious objects through automated static analysis, prioritizing the highest risks with actionable detail in only .005 seconds. With unmatched breadth and privacy, the platform accurately detects threats through explainable machine learning models, leveraging the largest repository of malware in the industry, containing more than 10 billion files and objects. Delivering transparency and trust, thousands of human-readable indicators explain why a classification and threat verdict was determined, while integrating at scale across the enterprise with connectors that support existing SIEM, SOAR, threat intelligence platform and sandbox investments, reducing incident response time for SOC analysts, while providing high-priority and detailed threat information for hunters to take quick action. Learn more at https://www.reversinglabs.com, or connect on LinkedIn or Twitter.

Media Contact: Jennifer Balinski, Guyer Group, jennifer.balinski@guyergroup.com

Go here to read the rest:
ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified...

Recommendation and review posted by Ashlie Lopez

Brian Burch Joins zvelo as Head of Artificial Intelligence and Machine Learning to Drive New Growth Initiatives – Benzinga

Posted: February 18, 2020 at 5:46 pm

GREENWOOD VILLAGE, Colo., Feb. 17, 2020 /PRNewswire-PRWeb/ --Driven by a passion for learning and all things data science, Brian Burch has cultivated an exemplary career in building solutions which solve business problems across multiple industries including cybersecurity, financial services, retail, telecommunications, and aerospace. In addition to having a strong technical background across a broad range of vertical markets, Brian brings deep expertise in the areas of Artificial Intelligence and Machine Learning (AI/ML), Software Engineering, and Product Management.

"We are excited about Brian Burch joining the zvelo leadership team," explains zvelo CEO, Jeff Finn. "zvelo is quickly gaining momentum with tremendous growth opportunities built upon the zveloAI platform. Brian brings an impressive background in AI/ML and data science to further zvelo's leadership for URL classification, objectionable and malicious detection and his passion aligns perfectly with zvelo's mission to improve internet safety and security."

From large organizations like CenturyLink and Regions Bank to successful startups like StorePerform Technologies and Cognilytics, Brian has a proven history of leveraging his vast experience in key leadership roles to advance business goals through a fully-immersed, hands-on approach.

"I'm especially excited about combining zvelo's strong web categorization technologies with the latest advances in AI/ML to identify malicious websites, phishing URLs, and malware distribution infrastructure, and play a key role in supporting the mission to make the internet safer for everyone," stated Burch.

About zvelo, Inc. zvelo is a leading provider of web content classification and of objectionable, malicious and threat detection services, with a mission of making the Internet safer and more secure. zvelo combines advanced artificial intelligence-based contextual categorization with sophisticated malicious and phishing detection capabilities that customers integrate into network and endpoint security, URL and DNS filtering, brand safety, contextual targeting, and other applications where data quality, accuracy, and detection rates are critical.

Learn more at: https://www.zvelo.com

Corporate Information: zvelo, Inc. 8350 East Crescent Parkway, Suite 450 Greenwood Village, CO 80111 Phone: (720) 897-8113 zvelo.com or pr@zvelo.com

SOURCE zvelo

More here:
Brian Burch Joins zvelo as Head of Artificial Intelligence and Machine Learning to Drive New Growth Initiatives - Benzinga

Recommendation and review posted by Ashlie Lopez

