We are a university spin-off registered and based in Singapore (UEN 53334305D) offering B2B sentiment analysis services. We differ from other companies providing similar products in three main ways:
1. OUR SOLUTIONS
You do not need to change your OS, UI, or IDE: our APIs are easy to use and embed in any framework. We offer fine-grained solutions to many subtasks of sentiment analysis, e.g., polarity detection, aspect extraction, subjectivity detection, temporal tagging, named-entity recognition, concept extraction, personality recognition, and sarcasm detection, and these are available in different domains, modalities, and languages.
2. OUR TRANSPARENCY
We show you what data are collected and how each item is classified. Most companies, instead, adopt a black-box strategy in which they only show you the classification results. This way you can never be sure how accurate their analysis really is, because they disclose neither the data nor the techniques adopted for classifying such data (which, in most cases, are rather obsolete).
3. OUR APPROACH
NLP research is evolving very fast and the only way to stay up to date with it is to be fully immersed in academia. We are not just a company: we are a research lab. We know the current and future trends of NLP and we always embed the latest techniques in our APIs. Unlike most companies (which tend to focus on only one facet of the problem), we take a very multidisciplinary approach to sentiment analysis (see below).
Sentiment analysis is a multi-faceted problem that entails many difficult NLP tasks such as intention mining, aspect extraction, personality recognition, sarcasm detection, and more. As a research topic, sentiment analysis is nearer to natural language understanding than it is to NLP research. For this reason, focusing on only one aspect of the problem would be very limiting. Instead, we take a very multidisciplinary approach to sentiment analysis by concomitantly leveraging recent advances in knowledge representation, mathematics, commonsense reasoning, deep learning, linguistics, and psychology.
We represent commonsense knowledge as a semantic network of concepts linked to each other and to emotions via a set of semantic relationships.
We leverage several multivariate statistical methods, e.g., LDA, PCA, and multidimensional scaling, for inferential and analogical reasoning.
We adopt the panalogy paradigm to represent knowledge redundantly at three levels and use all three concomitantly for commonsense reasoning.
We further develop and apply the most recent deep learning techniques, e.g., RNN, CNN and LSTM, for context-sensitive emotion and sentiment analysis.
We use linguistic patterns to better understand sentence structure by studying how sentiments flow through the different parts of a review.
We leverage the psychology of emotions to model both the type and the intensity of the emotions conveyed in text and, hence, calculate polarity.
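As a minimal illustration of the semantic-network representation above, concepts can be stored as nodes with labeled edges to related concepts and emotions. The concepts, relation names, and entries below are illustrative examples, not actual SenticNet data.

```python
# A tiny commonsense semantic network: concepts are nodes, and labeled
# edges link them to related concepts and to emotions.
# All concepts and relation names here are illustrative, not SenticNet data.
semantic_network = {
    "birthday_party": [("CAUSES", "joy"), ("RELATED_TO", "cake")],
    "cake": [("IS_A", "food"), ("CAUSES", "joy")],
    "traffic_jam": [("CAUSES", "anger"), ("RELATED_TO", "car")],
}

def emotions_of(concept):
    """Return the emotions a concept is directly linked to via CAUSES edges."""
    return [target for relation, target in semantic_network.get(concept, [])
            if relation == "CAUSES"]

print(emotions_of("birthday_party"))  # ['joy']
print(emotions_of("traffic_jam"))     # ['anger']
```

Traversing such a graph is what lets an affective reasoner connect a concept like birthday_party to an emotion it never co-occurred with in the input text.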
With the recent development of deep learning, research in AI has gained new vigor and prominence. Machine learning, however, suffers from three big issues, namely:
1. Dependency: it requires (a lot of) training data and is domain-dependent;
2. Consistency: different training or tweaking leads to different results;
3. Transparency: the reasoning process is unintelligible (black-box algorithms).
At SenticNet, we address such issues in the context of NLP through a multi-disciplinary approach, termed sentic computing, that aims to bridge the gap between statistical NLP and many other disciplines that are necessary for understanding human language, such as linguistics, commonsense reasoning, and affective computing. Sentic computing, whose term derives from the Latin sensus (as in commonsense) and sentire (root of words such as sentiment and sentience), enables the analysis of text not only at document, page or paragraph level, but also at sentence, clause, and concept level.
This is possible thanks to an approach to NLP that is both top-down and bottom-up: top-down because sentic computing leverages symbolic models such as semantic networks and conceptual dependency representations to encode meaning; bottom-up because we use sub-symbolic methods such as deep neural networks and multiple kernel learning to infer syntactic patterns from data. Coupling symbolic and sub-symbolic AI is key for stepping forward on the path from NLP to natural language understanding. Relying solely on machine learning, in fact, only allows us to make a 'good guess' based on past experience, because sub-symbolic methods only encode correlation and their decision-making process is merely probabilistic. Natural language understanding, however, requires much more than that. To use Noam Chomsky's words, "you do not get discoveries in the sciences by taking huge amounts of data, throwing them into a computer and doing statistical analysis of them: that's not the way you understand things, you have to have theoretical insights".
SenticNet positions itself as a horizontal technology that serves as a backend to many different business applications in areas such as e-business, e-commerce, e-governance, e-security, e-health, e-learning, e-tourism, e-mobility, e-entertainment, and more. In particular, sentic computing's novelty gravitates around three key shifts:
1. Shift from mono- to multi-disciplinarity – evidenced by the concomitant use of AI and Semantic Web techniques, for knowledge representation and reasoning; mathematics, for carrying out tasks such as graph mining and dimensionality reduction; linguistics, for discourse analysis and pragmatics; psychology, for cognitive and affective modeling; sociology, for understanding social network dynamics and social influence; and, finally, ethics, for understanding related issues about the nature of mind and the creation of emotional machines.
2. Shift from syntax to semantics – enabled by the adoption of the bag-of-concepts model instead of simply counting word co-occurrence frequencies in text. Working at concept level entails preserving the meaning carried by multi-word expressions such as cloud_computing, which represent 'semantic atoms' that should never be broken down into single words. In the bag-of-words model, for example, the concept cloud_computing would be split into computing and cloud, which may wrongly activate concepts related to the weather and, hence, compromise categorization accuracy.
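The difference between the two models can be sketched in a few lines of Python; the one-entry multiword lexicon below is a hypothetical stand-in for a real concept inventory.

```python
# Bag-of-words splits every token; bag-of-concepts first greedily matches
# known multiword expressions so that 'semantic atoms' survive intact.
# The one-entry lexicon is illustrative only.
MULTIWORD_LEXICON = {("cloud", "computing"): "cloud_computing"}

def bag_of_words(text):
    return text.lower().split()

def bag_of_concepts(text):
    tokens, out, i = text.lower().split(), [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in MULTIWORD_LEXICON:        # keep the multiword expression whole
            out.append(MULTIWORD_LEXICON[pair])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(bag_of_words("cloud computing is cheap"))     # ['cloud', 'computing', 'is', 'cheap']
print(bag_of_concepts("cloud computing is cheap"))  # ['cloud_computing', 'is', 'cheap']
```

In the first output, cloud appears as a standalone token and could wrongly activate weather-related concepts; in the second, the semantic atom is preserved.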
3. Shift from statistics to linguistics – implemented by allowing sentiments to flow from concept to concept based on the dependency relation between clauses. The sentence “iPhone7 is expensive but nice”, for example, is equivalent to “iPhone7 is nice but expensive” from a bag-of-words perspective. However, the two sentences bear opposite polarities: the former is positive, as the user seems willing to make the effort to buy the product despite its high price; the latter is negative, as the user complains about the price of iPhone7 although he/she likes it.
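A first approximation of this behavior is an adversative 'but' pattern in which the clause after 'but' dominates the overall polarity. The toy lexicon values below are illustrative, not actual SenticNet scores.

```python
# Toy polarity lexicon (illustrative values, not SenticNet scores).
POLARITY = {"expensive": -0.5, "nice": 0.7}

def clause_polarity(clause):
    return sum(POLARITY.get(w, 0.0) for w in clause.split())

def sentence_polarity(sentence):
    # Adversative pattern: the clause after 'but' overrides the one before it.
    lowered = sentence.lower()
    if " but " in lowered:
        before, after = lowered.split(" but ", 1)
        return clause_polarity(after)
    return clause_polarity(lowered)

print(sentence_polarity("iPhone7 is expensive but nice"))  # 0.7 (positive)
print(sentence_polarity("iPhone7 is nice but expensive"))  # -0.5 (negative)
```

Even this crude rule separates the two sentences that a bag-of-words model cannot tell apart; the real sentic patterns generalize this idea over full dependency trees.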
Sentic computing takes a holistic approach to natural language understanding by handling the many sub-problems involved in extracting meaning and polarity from text. While most works approach it as a simple categorization problem, sentiment analysis is actually a suitcase research problem that requires tackling many NLP tasks. As Marvin Minsky would say, the expression 'sentiment analysis' itself is a big suitcase (like many others related to affective computing, e.g., emotion recognition or opinion mining) that all of us use to encapsulate our jumbled idea about how our minds convey emotions and opinions through natural language. Sentic computing addresses the composite nature of the problem via a three-layer structure that concomitantly handles tasks such as:
• concept extraction, to deconstruct text into words and multiword expressions;
• subjectivity detection, to filter out neutral content;
• named-entity recognition, to locate and classify named entities into pre-defined categories;
• personality recognition, to distinguish between different personality types of users;
• sarcasm detection, to detect and handle sarcastic expressions;
• aspect extraction, to enable aspect-based sentiment analysis, and more.
The core element of sentic computing is SenticNet, a knowledge base of 50,000 commonsense concepts. Unlike many other sentiment analysis resources, SenticNet is not built by manually labeling pieces of knowledge coming from general NLP resources such as WordNet or DBpedia. Instead, it is automatically constructed by applying graph-mining and multi-dimensional scaling techniques to the affective commonsense knowledge collected from three different sources, namely: WordNet-Affect, Open Mind Common Sense, and GECKA. This knowledge is represented redundantly at three levels (following Minsky's panalogy principle): semantic network, matrix, and vector space. Subsequently, semantics and sentics are calculated through the ensemble application of spreading activation, neural networks, and an emotion categorization model. More details about this process are provided in the latest sentic computing book (chapter 2).
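The spreading-activation step can be illustrated in a few lines of Python: activation starts at seed concepts and propagates to neighbors, attenuated by a decay factor at each hop. The graph, seed, decay factor, and hop count below are illustrative stand-ins, not the actual SenticNet construction.

```python
# A minimal spreading-activation sketch over a concept graph.
# Graph, seeds, decay, and hop count are illustrative, not SenticNet's setup.
GRAPH = {
    "birthday": ["party", "gift"],
    "party": ["joy"],
    "gift": ["joy", "surprise"],
}

def spread(seeds, decay=0.5, hops=2):
    activation = dict(seeds)   # concept -> activation level
    frontier = dict(seeds)
    for _ in range(hops):
        next_frontier = {}
        for node, level in frontier.items():
            for neighbor in GRAPH.get(node, []):
                gain = level * decay           # signal weakens with distance
                if gain > activation.get(neighbor, 0.0):
                    activation[neighbor] = gain
                    next_frontier[neighbor] = gain
        frontier = next_frontier
    return activation

result = spread({"birthday": 1.0})
print(result["joy"])  # 0.25: two hops away from the seed
```

The idea is that affective properties of seed concepts leak, with diminishing strength, into semantically related concepts that were never labeled directly.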
SenticNet can be used for different sentiment analysis tasks including polarity detection, which is performed by means of sentic patterns. Such patterns are applied to the dependency syntactic tree of a sentence, as shown in Fig(a) below. The only two words that have intrinsic polarity are shown in yellow; the words that modify the meaning of other words, in a manner similar to contextual valence shifters, are shown in blue. A baseline that completely ignores sentence structure, as well as words that have no intrinsic polarity, is shown in Fig(b): the only two words left are negative and, hence, the total polarity is negative. However, the syntactic tree can be re-interpreted as a 'circuit' in which the 'signal' flows from one element (or subtree) to another, as shown in Fig(c). After removing the words not used for polarity calculation (in white), a circuit with elements resembling electronic amplifiers, logical complements, and resistors is obtained, as shown in Fig(d). More details about this process are provided in the latest sentic computing book (chapter 3).
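The circuit analogy can be made concrete by treating intensifiers as amplifiers that scale the signal and negations as logical complements that flip its sign. The lexicons and values below are illustrative toys, not the actual sentic patterns.

```python
# Circuit-style polarity flow: words with intrinsic polarity emit a signal;
# modifiers transform it like electronic components. Values are illustrative.
POLARITY = {"good": 0.6, "bad": -0.6}
AMPLIFIERS = {"very": 1.5, "extremely": 2.0}   # scale the signal (amplifiers)
COMPLEMENTS = {"not", "never"}                  # flip the signal (complements)

def polarity(tokens):
    signal, gain, flip = 0.0, 1.0, False
    for tok in tokens:
        if tok in AMPLIFIERS:
            gain *= AMPLIFIERS[tok]
        elif tok in COMPLEMENTS:
            flip = not flip
        elif tok in POLARITY:
            signal = POLARITY[tok] * gain * (-1 if flip else 1)
            gain, flip = 1.0, False   # modifiers affect only the next polar word
    return signal

print(polarity("this is very good".split()))  # ~0.9 (amplified positive)
print(polarity("this is not good".split()))   # -0.6 (complemented to negative)
```

This flat-token sketch ignores the tree structure; in the real system the signal flows along dependency arcs, so modifiers act on whole subtrees rather than on the next word.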
Do not hesitate to contact us and let us know what kind of data analysis your business needs. We will help you choose the solutions that can make it blossom.
Besides APIs for general sentiment analysis tasks, we can create ad-hoc APIs to suit your needs. Our APIs are platform-independent and are available in different domains, modalities, and languages.
You can try our APIs for free before you decide whether they are actually what you need. We will do our best to help you understand your customers and, hence, make them happy.
In the downloads section of our research website, we provide several free resources. The latest version of SenticNet is also available as a Python package and as a free API in many different languages. We also share some code on our GitHub account. Our most advanced tools, however, are pay-per-use services. Here, you can try three demos of such tools for the sentiment analysis tasks of:
• concept extraction
• polarity detection
• aspect extraction
A simple Twitter data visualization tool (Sentic Tweety) is also available here:
Before polarity detection can be performed, multiword expressions need to be extracted from text. Below is a demo of the concept parser, which quickly identifies commonsense concepts in free text without requiring time-consuming phrase structure analysis. From a sentence like “I went to the market to buy fruits and vegetables”, the parser extracts concepts such as go_to_market, market, buy_fruit, and buy_vegetable. The parser makes use of linguistic patterns to deconstruct natural language text into meaningful pairs, e.g., ADJ+NOUN, VERB+NOUN, and NOUN+NOUN, and then exploits commonsense knowledge to infer which of these pairs are most relevant in the current context. In this demo, the output is limited to 15 concepts.
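A minimal sketch of this pair-based extraction is shown below. The tiny POS and lemma tables are hypothetical stand-ins for a real tagger and lemmatizer, and the simplification drops function words (so go_to_market comes out as go_market) and keeps only VERB+NOUN pairs.

```python
# Pair-based concept extraction sketch: tag tokens with a tiny hand-made
# POS lexicon (a stand-in for a real tagger), then emit VERB+NOUN pairs
# as candidate concepts. All tables are illustrative.
POS = {"went": "VERB", "buy": "VERB", "market": "NOUN",
       "fruits": "NOUN", "vegetables": "NOUN"}
LEMMA = {"went": "go", "fruits": "fruit", "vegetables": "vegetable"}
SKIP = {"i", "to", "the", "and"}   # function words ignored by this sketch

def extract_concepts(text):
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    concepts, verb = [], None
    for tok in tokens:
        if tok in SKIP:
            continue
        tag, lemma = POS.get(tok), LEMMA.get(tok, tok)
        if tag == "VERB":
            verb = lemma                       # remember the governing verb
        elif tag == "NOUN":
            concepts.append(lemma)             # bare noun concept
            if verb:
                concepts.append(f"{verb}_{lemma}")   # VERB+NOUN pair
    return concepts

print(extract_concepts("I went to the market to buy fruits and vegetables"))
# ['market', 'go_market', 'fruit', 'buy_fruit', 'vegetable', 'buy_vegetable']
```

Note how the remembered verb lets the pair cross the conjunction, yielding buy_vegetable as well as buy_fruit; the real parser additionally ranks candidate pairs by commonsense relevance.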
Polarity detection is the most basic task of sentiment analysis and consists of the binary classification of text as either positive or negative. The demo leverages linguistic patterns and falls back on machine learning when no patterns are matched. Please note that the task of subjectivity detection is not addressed here; hence, the demo assumes that the input sentence is opinionated (not neutral). Also, the current version of the demo does not deal with comparative sentences such as "I love iPhone but Android is so much better".
Aspect extraction is a necessary pre-processing step for aspect-based sentiment analysis, i.e., the detection of polarity with respect to different product or service features (aspects) instead of the overall polarity of the opinion. This is key for correctly calculating the polarity of sentences in which antithetic opinions about different aspects of the same product are expressed. From a sentence like “the touchscreen is good but the battery lasts very little”, for example, the aspect parser extracts touchscreen and battery.
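Under the simplifying assumption of a known aspect lexicon, this step can be sketched as follows; the lexicon and clause markers below are hypothetical, whereas the real aspect parser discovers aspects from data.

```python
# Aspect-extraction sketch: split a review into clauses on a few
# coordinating/adversative markers, then match each clause against a small
# aspect lexicon. Lexicon and markers are illustrative only.
ASPECTS = {"touchscreen", "battery", "camera", "price"}
SPLITTERS = (" but ", " and ", " although ")

def extract_aspects(sentence):
    clauses = [sentence.lower()]
    for marker in SPLITTERS:
        clauses = [part for clause in clauses for part in clause.split(marker)]
    found = []
    for clause in clauses:
        for word in clause.replace(",", " ").split():
            if word in ASPECTS and word not in found:
                found.append(word)
    return found

print(extract_aspects("The touchscreen is good but the battery lasts very little"))
# ['touchscreen', 'battery']
```

Splitting on clause markers first is what allows each aspect to be paired later with the polarity of its own clause, rather than with the overall polarity of the sentence.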
Erik is the Founder of SenticNet and an Assistant Professor at Nanyang Technological University, where he teaches and conducts research on natural language processing and information retrieval.
Alberto is the CEO of SenticNet and an expert in communications, corporate strategy, and product design, who specializes in finding emergent synergies, both in technology and business.
Leaf is the CMO of SenticNet and an entrepreneur with over twelve years of experience in investment management consulting, marketing communication, public relations and financial journalism.
Chen is a skilled project manager and a knowledgeable information technology professional with years of experience in business analytics and development.
Prateek is an expert in deep learning and natural language processing, which he further develops and applies to solve sentiment analysis tasks such as personality recognition and sarcasm detection.
Anirban is an experienced software designer and developer with excellent analytical skills in Microsoft Technologies (SharePoint, ASP.NET/C#, SSRS) and other web technologies (HTML/CSS, XML, JSON).