
Digital Trowel was founded to help alleviate the information overload that inevitably accompanies the growth of the web. One part of what we do is gather company and executive information and deliver it as relevant, up-to-date business information.

One of our biggest challenges is determining whether a company found in one source is the same as a company found in another. Using our proprietary semantic Identity Resolution engine, we perform advanced matching to decide whether two records refer to the same company. This enables us to avoid duplicates in the database and to combine multiple information sources into rich company and executive profiles. This process is referred to as “Identity Resolution” or “Match & Merge.”

Our matching system is based on advanced semantic and statistical models, natural language processing and machine learning. Using our massive database of web-sourced data, the system has learned sets of positive and negative rules to decide whether two contacts should be matched.

Positive Rules

The inputs are first run through a defined set of positive rules that estimate the likelihood that the two inputs refer to the same entity. The company names are examined first, and based on the likelihood that they refer to the same company, the pair is given a rating between 0 and 1.

For example: Luigi’s Pizzeria and Luigi’s Italian Restaurant will be matched and assigned a score of 1 as it is likely that the two names refer to the same company.

Next, each company’s contact information is examined in order to ascertain a true connection between the two. This includes the physical addresses of the two entities, phone numbers, URLs, employee information and so on. Based on the similarities across all this information, the two entities are combined into a “matched set.”

“Fuzzy matching” is used to identify spelling mistakes, and an advanced semantic process analyzes the actual meaning of the content, allowing the system to take synonyms and similar-meaning words (such as “restaurant” and “diner”) into account. In addition, an abbreviation process considers whether YMCA is a name in its own right or just shorthand for Young Men’s Christian Association.
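The flavor of this matching step can be sketched in a few lines. This is purely illustrative: our real engine uses learned semantic models, while the synonym and stopword lists below are invented for the example.

```python
from difflib import SequenceMatcher

# Invented, toy-sized knowledge; stand-ins for learned semantic models.
SYNONYMS = {"restaurant": "eatery", "diner": "eatery", "pizzeria": "eatery"}
STOPWORDS = {"inc", "llc", "ltd", "co", "the"}

def normalize(name):
    # lowercase, drop legal suffixes, collapse synonyms to one canonical form
    tokens = [t.strip(".,").lower() for t in name.split()]
    tokens = [SYNONYMS.get(t, t) for t in tokens if t not in STOPWORDS]
    return " ".join(tokens)

def name_score(a, b):
    """Fuzzy 0..1 similarity between two normalized company names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

score = name_score("Luigi's Pizzeria", "Luigi's Italian Restaurant")
```

Because “pizzeria” and “restaurant” collapse to the same canonical token, the Luigi’s pair scores well above what raw string comparison would give.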

Negative Rules

Every “matched set” is then processed through a defined set of negative rules that examine the likelihood that, despite passing the positive rules, the two entities are in fact different companies. The DUNS number, stock symbol and other recognized identifiers are examined for discrepancies. If a conflict in information is found, the set is assigned a “problematic” status, re-examined, and either accepted as a match or dismissed as two distinct companies.


After two entities are matched, the next step is to merge them correctly into one profile. Sources are assigned priority levels based on the quality, accuracy and recency of their information, so that any conflicting data can be resolved in favor of the source with the highest priority. Priority and source quality are assigned independently per attribute; this way we can use the most accurate information available on one topic regardless of the accuracy of the rest of the information the source provides.

For example: if one source has great company financial data but poor phone records, then beyond the overall priority score assigned to the source as a whole, each of these attributes receives its own quality and priority ranking. The source would be consulted for financial information but essentially disregarded for phone records. In this way the most relevant and accurate information per category is used.
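The per-attribute merge just described can be caricatured as follows. The source profiles and quality scores are invented for the example; the point is only that each field is taken from whichever source scores highest on that field.

```python
# Hypothetical sketch of per-attribute "Match & Merge": every source carries
# a quality score per field, and each field is taken from whichever source
# scores highest on that field. Source data and scores are invented.
def merge(records):
    """records: iterable of (profile_dict, {field: quality_score})."""
    merged, best = {}, {}
    for profile, quality in records:
        for field, value in profile.items():
            q = quality.get(field, 0.0)
            if q > best.get(field, -1.0):
                merged[field] = value
                best[field] = q
    return merged

source_a = ({"revenue": "12M", "phone": "555-0100"}, {"revenue": 0.9, "phone": 0.2})
source_b = ({"revenue": "10M", "phone": "555-0199"}, {"revenue": 0.4, "phone": 0.8})
profile = merge([source_a, source_b])
# financials come from source_a, phone records from source_b
```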

Precision and Recall

We need to strike the optimal balance between precision and recall. For our purposes, precision is the fraction of retrieved information that is correct, while recall is the fraction of available information that is retrieved. We have created advanced tools that let us dial precision and recall up or down to find the right balance. The more “loosely” we match, the more information we can extract on the topic, leading to higher recall; the “tighter” the match, the less likely we are to connect the dots incorrectly, improving precision.
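That dial can be demonstrated directly: raising a match threshold tightens matching (higher precision), lowering it loosens matching (higher recall). The pair scores and ground truth below are fabricated for illustration.

```python
# Sketch of the precision/recall trade-off at different match thresholds.
def precision_recall(scored_pairs, true_matches, threshold):
    predicted = {p for p, s in scored_pairs if s >= threshold}
    tp = len(predicted & true_matches)
    precision = tp / len(predicted) if predicted else 1.0
    recall = tp / len(true_matches) if true_matches else 1.0
    return precision, recall

scores = [("a", 0.95), ("b", 0.80), ("c", 0.60), ("d", 0.40)]
truth = {"a", "b", "d"}

loose = precision_recall(scores, truth, 0.5)   # keeps more pairs: higher recall
tight = precision_recall(scores, truth, 0.9)   # keeps only confident pairs: higher precision
```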

Depending on our customers’ wants and expectations, we can select the appropriate balance between precision and recall to offer the richest, most accurate concentration of knowledge.



As the closing post of this series, this one will concentrate on how to use web-mined event information as variables in modeling and/or decisioning. For simplicity’s sake, I will break it down by sub-topic.

Introducing Event Data into Predictive Models

Event data can easily be introduced as predictor (independent) variables within pre-existing risk and marketing models in two basic modes.  The first adds them as later stage variables, which means the pre-existing model variables are entered on a forced basis (which replicates the current state of the model), and the event variables are subsequently allowed to enter.  This process ensures that the event variables are evaluated for their incremental contribution to the model, and do not displace any pre-existing model variables.  In contrast, the alternative mode starts the model development from scratch, and pre-existing model variables might be replaced by the event variables.  This approach may outmode the current model, but yield a more optimized set of factors.

The event input data should be coded by event type as well as time period, for example the number of litigation occurrences in the last 3, 6, 12, 18 and 24+ months.  As a simple example, I’ve found very high correlation between the number of lawsuits from different parties and payment delinquency.  Sometimes, source and amount are also desirable, but from a practical perspective they create significant complexity (a single litigation event might now be exploded into many different source-and-amount combinations that need to be individually tested).
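The time-window coding described above can be sketched as follows. The windows approximate 3/6/12/24 months in days; the sample events and dates are invented for illustration.

```python
from datetime import date

# Trailing windows, in days, approximating 3/6/12/24 months.
WINDOWS = (90, 180, 365, 730)

def event_features(events, event_type, as_of):
    """events: list of (type, date) pairs -> {feature_name: count}."""
    ages = [(as_of - d).days for t, d in events if t == event_type]
    return {"%s_last_%dd" % (event_type, w): sum(0 <= a <= w for a in ages)
            for w in WINDOWS}

events = [("litigation", date(2010, 1, 15)),
          ("litigation", date(2009, 9, 1)),
          ("litigation", date(2008, 2, 1))]
feats = event_features(events, "litigation", date(2010, 3, 1))
```

Each feature can then enter a risk model directly as a predictor variable.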

Treating Events as Triggers

Sometimes, events are hugely significant in their impact on risk and/or reward.  An obvious example is M&A, which (believe it or not) is a variable ignored in risk models.  The effects of these events cannot be easily quantified in models, so they are best treated as “triggers” or decisioning input (for subsequent manual review and intervention).

Sentiment Analysis

Sentiment Analysis of companies is one of the more interesting qualitative pieces of data that has recently become available, due to advances in web mining. Briefly stated, sentiment measures the positive or negative “buzz” about a company.  The firms that utilize product sentiment analysis use varying sources and methods to produce “sentiment scores”.  Minimally, sentiment analysis can be used to corroborate certain decisions, and may have predictive ability as well.  Like events, sentiment scores can be used as time-based model variables, or as external triggers.

To conclude, I would like to stress that there should be no doubt that select business events affect the risk and opportunity value of a company.  Event data, and its accompanying sentiment, is available on a near-real-time basis on the Internet.  Semantic analysis companies (such as Digital Trowel) have created processes that mine this data and present it in coded form, which can be made available to scoring and decision models, as well as human monitors.  Virtually any company relying on risk and/or potential models can incorporate this powerful information to enhance their accuracy, employing it either as internal variables or as external decision factors.

Please contact me with any questions or comments. I can be reached by commenting on the blog, or via email at Steve (at) digitaltrowel.com

Check back soon for more in-depth exploration of the growing text-mining phenomenon.



I would now like to explore the concept of “Business Events,” particularly their effect on company risk. First things first, let’s define risk.  The traditional definition of risk is that a company will be unable to make the required payments on its debt obligations.  This, of course, is a narrow financial definition, and if you’re a lender that’s probably exactly what you care about. If you’re a supplier, on the other hand, you probably view risk in a larger scope: for example, is your customer having financial difficulties, and will it demand to renegotiate payment terms on a more extended basis, renegotiate pricing downward, reduce its order commitments, and so on?  Furthermore, although many risk models have a 1-2 year horizon, a short-term view is also needed, and web data can be used in that short-term (1-6 month) context.

Regardless of whether you’re a lender, supplier, analyst or salesperson, here are some events that impact the growth behavior of a company, and that can be mined from web data in a very timely manner:

  • Litigation: When a company starts to have cash flow problems, one of its first reactions is to delay payment to some suppliers.  At some point, payment delinquency moves from the “tolerant” stage to the litigation stage.  But what happens if you’re modeling the risk of a company and do not have access to its AP/AR data?  How do you recognize litigation without waiting for it to possibly appear in a financial report?  Fortunately, there are fee-based web sources that detect and track litigation, including LexisNexis, Public Access to Court Electronic Records (PACER), and D&B.  Publicized litigation that has made it into the media can be obtained at little or no direct cost; recent (2010) examples of major litigation include BP, Microsoft’s suit against Salesforce.com (patent infringement), Borg-Warner (asbestos product liability), and Chrysler (failure to pay suppliers).  Of course, the above companies are large enough to withstand the litigation payouts and avoid default; but what does this do to their sales & marketing budgets and supplier terms?  In our more expanded view of risk, these are important topics!
  • Analyst Recommendations: Analyst recommendations often, and quickly, affect a company’s stock price.  Downward recommendations that cause the stock to fall place pressure on the company to compensate. A typical reaction is to cut expenses in order to boost earnings. Of course, this action does not bode well for the company’s S&M efforts, or for its suppliers.
  • Partnerships: Partnerships usually indicate positive growth activity, and by logical extension, lower the company risk.
  • M&A: M&A logically reduces the target company’s risk.  Although M&A (and even its announcement) should immediately change the risk score of the target company, this is usually not the case, since the scoring models have no way of quickly recognizing the event.
  • Key employee movement: When a company hires a heavyweight senior executive, it is invariably a growth move, which should lower risk (otherwise the executive would likely not take the new position).
  • Insider trading: The purchase of shares by insiders is often a leading indicator that they expect the stock to rise in the near future (which is itself a leading indicator that the company will expand due to its increased market cap).
  • Product introductions:  A new product introduction is typically a leading indicator of growth, hype, success, and similar; these are all leading indicators that reflect a lowering of risk.
  • Product recalls: At a minimum, product recalls are a negative distraction for sales.  Sometimes, as in the pharma sector, recalls can have a devastating effect on sales; sometimes, as in the auto industry, the effect may be more temporary. In either case, they diminish the strength of a company.
  • Financial announcements: Financial announcements are excellent leading indicators, on both the upside and the downside. They appear on the Internet well before they appear in the financial statements that drive typical company risk models.
  • Competitive tracking: Significant changes in competitive activity greatly affect market potential models, and could well affect risk models. Competitive monitoring becomes increasingly important in economic downturns, since supplier loyalty is overshadowed by the customer’s need for cost reductions. Generally speaking, as direct competition grows it becomes more formidable to deal with, and competitive events (product, financial, employment, and so on) should be quantified and incorporated into both risk and marketing models.

Whew! Now that we are all caught up on Business Events, check back for the third and final post of the series, which will tie everything together.

Check back in a couple of days for Part 3: Using Web Mined Data to Enhance the Performance of Business Risk and Opportunity Models

Please contact me with any questions or comments. I can be reached by commenting on the blog, or via email at Steve (at) digitaltrowel.com


– Steve


As you may know, business risk models have not fundamentally changed over the past 40 years.  The famed Altman Z-score model, first published in 1968 by Edward Altman, is still used as a pillar of bankruptcy modeling.  Why? Well, because risk models are typically founded on basic financial information such as working capital, total assets, retained earnings, EBIT, equity, sales, and similar financial statistics that reflect fundamental measurements of company health. Since the importance of these basic financial barometers hasn’t changed over time, the models that employ them haven’t needed to change either.  It is true that improvements in risk model performance can be made by incorporating payment patterns; however, this is more suitable for internal customer scoring models, as finding enough reliable and ongoing payment data for an external risk model build and score is difficult indeed!
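For reference, Altman’s original 1968 Z-score is exactly such a model: a fixed linear combination of the financial-statement ratios listed above. The coefficients below are Altman’s published ones for publicly traded manufacturers; the sample figures are invented.

```python
# Original 1968 Altman Z-score for publicly traded manufacturers.
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(40.0, 120.0, 60.0, 500.0, 700.0, 1000.0, 400.0)
# conventional reading: below ~1.81 signals distress, above ~2.99 is "safe"
```

Note that every input comes from a financial statement, which is exactly the data-freshness limitation discussed below.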

Having been in the business risk and opportunity-modeling arena for many years, I’ve come to the conclusion that the greatest weakness in business data modeling is quite simply the age of the data.  There is no doubt that a downswing in EBIT spells bad news; but by the time that is recognized in a financial report, it’s very late in the game, and no modeling technique can overcome the limitations of old data.  In my search for a better source of leading indicators, I naturally gravitated to the Internet.  After all, the Internet offers an unparalleled, rich, dynamic source of data in both quantitative (e.g. financial reports) and qualitative (e.g. sentiment) form, and many of these are inherently powerful leading indicators of both risk and opportunity.

Not coincidentally, statistical package developers such as SAS and SPSS have already launched applications that combine text mining and analytics.  However, for many companies, it will be preferable to gather the data as a separate process, and then integrate it into their modeling/decisioning processes. Recently, I’ve found that incorporating web data can improve the accuracy/timeliness of risk-based decisions by as much as 20%; even larger benefits can be expected in the area of market potential analytics.

Stay tuned for my upcoming musings on this topic. Part 2: Using Web Mined Data to Enhance the Performance of Business Risk and Opportunity Models

Please contact me with any questions or comments. I can be reached by commenting on the blog, or via email at Steve (at) digitaltrowel.com

Looking forward to an active dialogue.



We’re back!

Hey Everyone,
So much has been happening here at Digital Trowel in the past months, and we’ve sort of let the blog fall to the side. But no more!
This blog is now a company-wide affair. You’ll be seeing regular postings from numerous members of our team, about everything from text mining and data analytics, to new product ideas and development updates.
First up – Steve Gasner, our Chief Data Officer, posting about risk modeling.
Please comment and reply; all are welcome. For anything else, please feel free to email yoni (at) digitaltrowel.com with any questions or comments.


Uncovering the Secrets of Synergy

Well, in the previous section we mentioned in passing that our technology was based on a synergistic approach, combining syntax, semantics and pragmatics. In this final part of the survey, we’ll explain just how we do this, and why our system yields unparalleled results. In doing so we’ll do our best to abstract away from the underlying mathematics and details of the machine-learning algorithms, and instead present the linguistic principles by which our algorithms work using examples. To wrap things up, we’ll end this review with a snapshot of what Digital Trowel’s Sentiment Analysis looks like in action.

Our technological approach begins with the observation that sentiment is conveyed on three interacting levels of increasing structural complexity: the lexical, the phrasal and the semantic-event level of structure. We’ll explain.

Lexical sentiment, sometimes referred to as dictionary-based sentiment, is the sentiment attributed to single isolated words. For example:

great, wonderful, terribly, worrisome, helpful, etc…

Though single words clearly carry sentiment, this is the most rudimentary and least reliable sentiment available. To see this, consider the following phrases using the above examples:

Great failure
Wonderful fiasco
Terribly surprising comeback
Worrisome transformation for previous skeptics
Helpful in expediting the demise

It should be evident from the above phrases that the initial or “natural” sentiment associated with the isolated words has been transformed, if not outright negated. To avoid such “wonderful fiascoes” in deciphering sentiment, we apply lexical sentiment analysis only after the text has undergone syntactic parsing. In simple words, syntactic parsing means that sentences are analyzed to determine their grammatical structure and that each word is assigned its corresponding Part Of Speech (POS) tag.

Consider the following example taken from Cisco’s website (where red and green indicate negative and positive sentiment, respectively):

If Cisco does not achieve the desired level of acceptances, the company will withdraw the offer and evaluate alternative ways to expand our activities in the video communications market.

To glean the lexical sentiment, the sentence is first parsed, i.e. grammatically analyzed. For starters, this allows us to determine the subject of the sentence (“Cisco”) as well as any pronominal phrase referring to the subject (“the company”), both of which are marked in bold above. Naturally, this is of critical import in determining which company the sentiment should be associated with. Secondly, once we obtain the phrasal structure of the sentence, we can determine how a candidate lexical entry interacts with its clause-mate entries. In the example above, “desired” is typically associated with positive sentiment, but this sentiment is reversed by the negation “does not” appearing earlier in the clause. In the subsequent clause, on the other hand, the verb entries “evaluate” and “expand” maintain and even substantiate their positive sentiment, as there is nothing in the clause to alter their natural interpretation.
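The subject-and-pronoun step can be caricatured in a few lines. The generic-mention list is invented, and a real engine performs full anaphora resolution over parse trees; this toy version only maps generic mentions back to the most recent explicit company name.

```python
# Invented list of generic company mentions; a stand-in for real anaphora
# resolution over parsed sentences.
GENERIC = {"the company", "the firm", "it"}

def resolve(mentions):
    """Map each mention, in document order, to a concrete company name."""
    last = None
    resolved = []
    for m in mentions:
        if m.lower() in GENERIC and last is not None:
            resolved.append(last)   # generic mention inherits the last name seen
        else:
            last = m
            resolved.append(m)
    return resolved

chain = resolve(["Cisco", "the company"])
```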

Obviously, not all lexical entries are created equal. Entries vary both in the strength of the sentiment they convey and in their relative intra-clausal effect. For example, “excellent” conveys a stronger sentiment than “good”, whereas “great” and “superb” generally indicate the same level of positivity, but “great” is more susceptible to lexical negation (cf. “great mistake” vs. “superb mistake”). Different entries therefore receive different weights, depending on their relative sentimental strength and their susceptibility to polarity transformations. To assign these weights correctly, DT uses advanced statistical models generated from large manually analyzed text corpora. In addition, further factors such as conditional, speculative and counterfactual clause structures are taken into account before the final contribution of each entry is calculated.
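A toy version of weighted lexical scoring with clause-level polarity reversal looks like this. The lexicon, weights and single-negator rule are invented stand-ins for the statistically learned models described above.

```python
# Invented weights; real weights come from models trained on annotated corpora.
LEXICON = {"desired": 0.6, "expand": 0.7, "evaluate": 0.3,
           "great": 0.8, "superb": 0.8, "failure": -0.8, "mistake": -0.7}
NEGATORS = {"not", "no", "never"}

def clause_sentiment(tokens):
    score, polarity = 0.0, 1
    for tok in tokens:
        t = tok.lower()
        if t in NEGATORS:
            polarity = -1              # flip polarity for the rest of the clause
        elif t in LEXICON:
            score += polarity * LEXICON[t]
    return score

pos = clause_sentiment("evaluate alternative ways to expand".split())
neg = clause_sentiment("does not achieve the desired level".split())
```

On the Cisco sentence above, the “does not … desired” clause correctly comes out negative while the “evaluate … expand” clause comes out positive.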

But this is only the first and most rudimentary element of our synergistic approach.  The second, more complex element is associated with the phrasal level of structure. The phrasal level of analysis assigns a sentiment value to full phrases rather than to single words. Consider the following examples:

Cisco Chief Executive John Chambers has said the firm aims to gain market share in a tech recovery.

Boosted by those moves and  …  following last year’s 40 percent decline

Company Struggles in Attempt to Buy Time

In the examples above, the lexical level may signal that certain entries are positive or negative, but only a real phrase-level analysis can ascertain the sentiment. It is here that we first allow semantic and pragmatic factors to interact. It is not enough to understand the meaning of each word in isolation; the meaning of the entire phrase must be deciphered, and to do so correctly, context is needed.

Take a look, for instance, at the third example above. Usually when companies buy something, it’s either a product or another company. Here, however, it is clear that an idiomatic meaning is intended (buying time… stalling).

DT’s SA takes pragmatics to a whole new level. Not only do we use carefully developed word classes that let our engine bring outside knowledge to bear when interpreting text, but, working with a team of linguists and economists, we have developed specialized sets of phrase-level interpretive rules that allow the engine to identify the context of a sentence or phrase. Combine all of this with the simple pragmatic module used to identify key companies (by resolving anaphora, common nicknames and descriptors), and you end up with a context identifier that allows our engine to assign sentiment to even highly complicated, idiomatic or obscure phrases. Believe it or not, by allowing our semantic and pragmatic modules to collaborate, our engine is able to pick up on sarcastic, wishy-washy, and even ironic notes in the text.

This brings us to the third level of our Synergistic Sentiment Analysis, which is based on the interpretation of actual events within the text. Transcending both lexical and phrasal levels of interpretation, we have trained our engine to identify key economic events, and together with a team of experienced financial experts, we’ve created a scale of positive and negative weights for these events. Take a look at the following examples:

shares of Cisco Systems (Nasdaq: CSCO) were recently up 47 percent

Cisco expects revenue to grow 1 to 4 percent

Cisco(R) (NASDAQ: CSCO) today announced a revised recommended voluntary cash offer to acquire TANDBERG (OSLO: TAA)

All of the above are real examples of events captured by our SA engine and marked as positive. We currently have the engine trained to extract and evaluate dozens of event types, including purchases, stock offerings, workforce changes, legal events, product launches and recalls, hiring and firing of key figures, new facilities, bankruptcy, and more.
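Once events are extracted and typed, scoring them reduces to weighting and aggregation. The event names and weights below are invented, not our actual scale; the sketch only shows the shape of the event level.

```python
# Invented event weights; DT's real scale was built with financial experts.
EVENT_WEIGHTS = {"stock_up": 0.8, "revenue_growth": 0.6, "acquisition_offer": 0.5,
                 "product_recall": -0.7, "bankruptcy": -1.0}

def event_score(extracted_events):
    """Average the weights of recognized events into one score."""
    weights = [EVENT_WEIGHTS[e] for e in extracted_events if e in EVENT_WEIGHTS]
    return sum(weights) / len(weights) if weights else 0.0

# the three Cisco examples above, coded as event types
cisco = event_score(["stock_up", "revenue_growth", "acquisition_offer"])
```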

The event level of our SA is assigned the highest weights, since it combines and epitomizes all of our techniques, using syntactic, semantic and pragmatic analyses to determine each event’s contribution to the sentiment. In fact, we believe that by identifying and analyzing the key events in the text, we are emulating just what an expert would do when attempting to estimate the sentiment associated with a given text excerpt.

Starting from the lexical level, which allows us to pick up on subtle tones in the text, building up to phrases that indicate attitude, and embedding all of these within a semantic-pragmatic event extractor and economic analyzer, we believe we are truly able to capture the sentiment of a text very much as a human would, with incredible reliability and consistency. We may not have passed the Turing Test yet, but we’re surely on the way to improving the ability of machines to “understand” the natural language that humans use!

Well, for now that’s all we can show, without divulging too much 🙂

I sincerely hope that you now better understand Digital Trowel’s pioneering Synergistic Sentiment Analysis technology, and even more so I hope you’ve enjoyed the ride…

The next time someone asks you what Turing’s Test has to do with the stock market, I hope you know where to refer them!

Stay tuned for our official product release, and meanwhile, as they say in Boston: Have a good one! 🙂


Part 2 – Synergistic Sentiment Analysis:

The Space Between the Lines

Welcome back! Sit down and buckle up for a magical tour of text-mining technology, focusing on Sentiment Analysis (SA).

Well, first things first: what is Sentiment Analysis anyway? Rephrasing the Wikipedia definition: Sentiment analysis (sometimes called opinion mining) refers to an area of Natural Language Processing (NLP) that aims to determine the attitude of a writer with respect to some topic. This attitude may be their judgment or evaluation, their emotional state when writing, or the intended emotional communication the author wishes to convey.

To keep our discussion as concrete as possible we’ll use real life examples to elucidate the different types of attitudes.  Consider the following example:

This year was a setup year for B&N, and 2010 will see its efforts start to pay off […] In 2010, B&N will rack up significant sales of Nooks and e-books, as some consumers look for an Amazon alternative.

Obviously this excerpt contains an explicit positive evaluation of Barnes & Noble for 2010, but moreover the tone is upbeat, optimistic, and even excited. A good Sentiment Analysis system would pick up on this tone and report a highly positive sentiment for B&N and their e-reader Nook, whereas a negative, or at least apprehensive, sentiment should be reported for Amazon.

The next example is even more blatant:

Belated Happy New Year and already what a year it’s turning out to be for eReaders! […] Time’s a fave around here these days, especially considering its December report naming nook one of the Best Travel Gadgets of 2009 as well as rating the device # 2 among the Top Ten Gadgets of the year. While emphasizing nook’s “classy book-lending feature”, the magazine also cited “the powerful, flexible Android operating system that the whole package runs on.”

The exclamation mark, the rhythm, the tone, the profuse use of superlatives and positive adjectives all indicate an extremely positive sentiment for the nook product. It is clear that the author has a favorable opinion of the product, and moreover that he is quite eager to share his enthusiasm with the readers.

Obviously these are not the only attitudes that can be found on the web. Other attitudes may include anticipation, sarcasm, doubt, apprehension, cynicism and even condemnation. It’s our nature to focus on the good, so we’ll spare you examples of the negative attitudes (well, I guess it’s also that we prefer to avoid any unnecessary lawsuits :)) but the basic idea of what is meant by an underlying attitude should be clear by now.

It’s important to keep in mind that Sentiment Analysis is not severed from the basic meaning of the sentence. Rather, SA picks up on the basic meaning and further capitalizes on the cadence, the tone, the choice of words, and even the absence thereof, to build a complete picture of the message being conveyed. Note that we’ve implicitly drawn a line between some sort of “basic meaning” of a sentence, and the “ultimate intention” of the  message to be conveyed. Let’s try and be a bit more precise and explicit about this distinction.

Formal linguistic theory usually recognizes 3 levels of abstraction for natural language comprehension: Syntax, Semantics and Pragmatics (we are excluding phonology, phonetics and morphology which are irrelevant here). Simply stated, Syntax is the study of the grammatical structure of sentences, Semantics deals with how words are interpreted and how their interpretation is combined to yield the meaning of the sentence, and Pragmatics is the study of how extra-linguistic, real-world knowledge, so to speak, interacts with the basic meaning of sentences to yield the ultimate message conveyed.

So for example, syntactic theories may attempt to explain why the English sentence “I gave that to you” is fine whereas “You gave that to I” is ungrammatical. Semantic theories may attempt to explain what the meaning of a word such as “tall” is, and how this meaning can be reconciled with seemingly problematic examples such as “I am tall” vs. “The midget is only 4 feet tall”. Pragmatics goes one step further and attempts to explain how our knowledge of the world, circumstances, etc. plays with and alters the meaning of the conveyed message. So for example, although strictly speaking the sentence “I have 3 children” does not formally preclude the possibility that I have more than 3, say 5 children, it would generally be considered wrong, or at least odd, for someone who indeed has 5 children to utter it. To see how this judgment may change with circumstances, imagine Mr. Jones is being interviewed by the IRS and is notified by the interviewer that tax benefits are available to anyone with 3 or more children. Under these circumstances, we would probably no longer consider it odd for Mr. Jones to say “I have 3 children”, even if in fact he had 10 children.

So where does Sentiment Analysis fit in this 3-headed theoretical framework? If you’re guessing the answer lies somewhere between semantics and pragmatics, perhaps with a bit of a syntactic-twist, you’re following this introduction just fine. (If, on the other hand, you thought it was limited to the syntax, you may want to go brew yourself a fresh cup of coffee before you reread the last few paragraphs 🙂 ).

Mirroring the theoretical image portrayed above, Natural Language Processing algorithms consist of syntactic algorithms (most notably Part Of Speech (POS) parsers and taggers), semantic algorithms (e.g. semantic rulebooks and relation extraction algorithms) and finally pragmatic algorithms (including for example, contextual disambiguating algorithms, and world knowledge look-up algorithms, used in automated translators for instance). At Digital Trowel we’ve honed our Sentiment Analysis algorithms to combine the strengths of these 3 disciplines to produce the most reliable and comprehensive understanding of the message being conveyed, reading not only the text itself, but also between the lines, so to speak.

The mathematical implementation of these algorithms is beyond the scope of this introduction, but this by no means should prevent us from taking advantage of the knowledge we’ve gained thus far to see how Sentiment Analysis techniques may harness the power of the different types of linguistic algorithms in an attempt to achieve their goal. In fact the lion’s share of the third part of this survey aims to do just that. For now, suffice it to say that one of the main reasons we at DT believe that our technology is superior has to do with our synergistic approach of integrating syntactic, semantic and pragmatic algorithms. This is why we call it Synergistic Sentiment Analysis (SSA). BTW, for those of you wondering, synergy is the term used to describe a situation where different entities cooperate advantageously for a final outcome (tx, Wikipedia!). There, you now understand yet another word in the titles above 🙂

Before ending this part, let’s focus on the goals of SA, or in other words, what SA is good for. Well, in one sentence, as we already phrased it:

Extracting and discerning the underlying sentiment allows us to transform otherwise inert texts into vibrant business opportunities.

But how does this come about? I think the best way to explain is by using an example:

Every day, millions of business news articles are published on the web. Many of these articles contain both facts as well as judgments, predictions, and just plain old sentiment. Obviously, it is impossible for any one human (or even a team of a hundred people) to read all these articles, sieve and sort through them, extract the facts and discern the sentiment, let alone do this all in real time to facilitate decision-making. This is where our SA engine comes in.
In a few seconds, our Sentiment Analysis engine can run through thousands upon thousands of articles, sorting them by industry, company, product, etc., extracting key facts and events, and discerning the underlying sentiment. Take the stock market, for example. In less than 10 seconds, our SA engine can scan every article published within a specified time range that mentions any NYSE-listed company. Not only are key facts and events compiled into our database, but a sentiment score is calculated for each ticker, yielding a real-time numeric indication of each stock’s vibe across the market! Numeric scores can be translated into an array of decision-making procedures, and help with consolidating trading strategies. Now if that isn’t a great business idea, I don’t know what would constitute one!
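The per-ticker roll-up amounts to averaging article-level scores per symbol. The article scores below are fabricated; a real pipeline would take them from the SA engine.

```python
from collections import defaultdict

# Roll article-level sentiment up to one mean score per ticker.
def ticker_scores(article_sentiments):
    """article_sentiments: list of (ticker, score) -> {ticker: mean score}."""
    by_ticker = defaultdict(list)
    for ticker, score in article_sentiments:
        by_ticker[ticker].append(score)
    return {t: sum(v) / len(v) for t, v in by_ticker.items()}

scores = ticker_scores([("CSCO", 0.7), ("CSCO", 0.3), ("BKS", -0.2)])
```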

There are many other business opportunities for SA technology, including some we’ve already implemented at DT, such as evaluating pharmaceutical forums for clients’ sentiment about drugs, as well as sports-product satisfaction, but I think this is enough hype for now 🙂
The third and final part of this introduction to the field of SA goes a bit deeper into the SA engine itself and examines the innovative technology unique to Digital Trowel using real examples… Stay tuned, this is where things get really exciting 🙂

