Higher-Order Co-occurrence and Search Engine Optimization (SEO)

By Jose Nuñez

Latent Semantic Analysis is not purely about words. LSA can also be applied to other objects, such as images, events, and optically recognized text, and it has been used as a problem-solving method (Latent Problem Solving Analysis, LPSA).

Latent Semantic Analysis itself is not co-occurrence. Typically, more than 95% of word pairs with high similarity never appear together in the same paragraph. If two words do co-occur in the same document, their cosine similarity is not necessarily high; conversely, if two words never co-occur, their cosine can still be high. What LSA captures is higher-order co-occurrence: words become similar because they appear with the same neighbors, not because they appear next to each other.
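As a hedged illustration, here is a minimal sketch using scikit-learn on a toy corpus. The corpus, the word pair ("car" / "automobile"), and the choice of two SVD components are illustrative assumptions, not a recipe from the LSA literature; the point is only that two words which never share a document can still end up with a high cosine in the reduced space.

```python
# Minimal sketch of higher-order co-occurrence in LSA on a toy corpus.
# "car" and "automobile" never appear in the same document, yet their
# reduced vectors end up similar because both co-occur with the same
# neighbors ("engine", "tires", ...).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the car has a powerful engine",
    "the automobile has a powerful engine",
    "the car needs new tires",
    "the automobile needs new tires",
    "bananas are yellow fruit",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)            # document-term count matrix
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)

terms = list(vectorizer.get_feature_names_out())
term_vecs = svd.components_.T                 # one reduced vector per term

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

car = term_vecs[terms.index("car")]
automobile = term_vecs[terms.index("automobile")]
print(cosine(car, automobile))                # high, despite zero direct co-occurrence
```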

Latent Semantic Analysis neglects word order. This is another problem. The answer? The Syntagmatic Paradigmatic (SP) Model, a memory-based model that incorporates word order while preserving the distributional approach. It addresses the problems of word order and polysemy. It is well recognized that LSA does not necessarily account for terms that carry different meanings in different contexts, or for keyword sequences. The distinct senses of a word are not predetermined in some mental lexicon; because of the generative lexicon, they emerge in context.
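The word-order problem itself is easy to see. The sketch below is not the SP model; it only shows, under a simple bag-of-words assumption, why any count-based starting point for LSA treats reordered sentences as identical.

```python
# LSA starts from bag-of-words counts, so sentences with the same words
# in a different order map to identical vectors and a cosine of exactly 1.0.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

pair = ["the dog bit the man", "the man bit the dog"]
X = CountVectorizer().fit_transform(pair).toarray()

a, b = X[0], X[1]
print(np.array_equal(a, b))                              # True: identical count vectors
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))   # 1.0
```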

Vectors in Latent Semantic Analysis are context-free, but meaning is context-dependent. This is a problem. The remedy? By combining Latent Semantic Analysis with the Construction-Integration (CI) model of comprehension, word meanings can be made context-sensitive.
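The following sketch is loosely in the spirit of that idea, not the published CI algorithm: it shifts a static word vector toward the centroid of its current context. The toy vectors, the mixing weight `alpha`, and the helper name `contextualize` are all illustrative assumptions.

```python
# Hedged sketch: make a context-free vector context-sensitive by blending
# it with the centroid of the vectors in its current context.
import numpy as np

def contextualize(word_vec, context_vecs, alpha=0.5):
    """Blend a static word vector with the centroid of its context vectors."""
    centroid = np.mean(context_vecs, axis=0)
    mixed = (1 - alpha) * word_vec + alpha * centroid
    return mixed / np.linalg.norm(mixed)

# Toy 3-d vectors: "bank" is ambiguous between finance and river senses.
bank = np.array([0.5, 0.5, 0.0])
finance_context = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])]
river_context = [np.array([0.0, 1.0, 0.1]), np.array([0.0, 0.9, 0.2])]

print(contextualize(bank, finance_context))  # pulled toward the finance sense
print(contextualize(bank, river_context))    # pulled toward the river sense
```

The design choice here is the simplest possible one: a linear blend. The CI model proper does considerably more (constructing and settling a network of propositions), but the blend shows the basic mechanism by which a context-free vector can acquire a context-dependent reading.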
