Catastrophic Interference in Neural Embedding Models (Dachapally & Jones)

Catastrophic forgetting (catastrophic interference) is the tendency of neural models to show a strong recency bias: new learning overwrites the shared weights that encode earlier associations, so more recent training examples dominate the model's predictions.
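
A minimal sketch (not from the paper) of how error-driven updates to shared weights produce this recency bias, using a toy linear associator in NumPy; the cue/target vectors are made up for illustration:

```python
import numpy as np

# Toy linear associator: one shared weight matrix maps a cue to a target.
W = np.zeros((4, 4))
cue   = np.array([1.0, 0.0, 0.0, 0.0])   # e.g., the word 'bass'
fish  = np.array([0.0, 1.0, 0.0, 0.0])   # first-learned association
music = np.array([0.0, 0.0, 0.0, 1.0])   # later-learned association
lr = 0.1

def train(W, target, steps=200):
    for _ in range(steps):
        error = target - W @ cue          # error-driven (delta-rule) update
        W = W + lr * np.outer(error, cue)
    return W

W = train(W, fish)    # phase 1: cue -> 'fish' meaning
W = train(W, music)   # phase 2: the same weights are reused and overwritten
print(np.round(W @ cue, 2))  # ~[0, 0, 0, 1]: only the most recent association survives
```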

DSM

Distributional Semantic Models (DSMs) encompass geometric/count-based models, such as Latent Dirichlet Allocation and SVD-based methods, as well as neural embedding models. Neural embedding models are trained incrementally, predicting words from their contexts and updating shared weights one example at a time, which is what exposes them to catastrophic interference.
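
A hedged sketch of training such a prediction-based embedding model with the word2vec skip-gram architecture, assuming gensim 4.x; the paper's exact model settings are not reproduced here:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; each sentence is a list of tokens.
sentences = [["man", "catch", "trout"],
             ["woman", "eat", "bass"],
             ["man", "play", "acoustic"]]

# sg=1 selects skip-gram; weights are updated incrementally, example by example.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, epochs=50, seed=1)

vec = model.wv["bass"]   # the single learned vector for the homograph 'bass'
```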

Experiment 1

Create artificial data

using the following sentence-generation patterns:

  • "Man/woman catch/eat trout/bass"
  • "Man/woman play/pluck acoustic/bass"

The idea is to capture the two meanings of the homograph 'bass' and place each in an embedding context identical to that of a same-sense synonym ('trout' for the fish sense, 'acoustic' for the musical sense); a generation sketch follows.
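
A minimal sketch of expanding the two templates above into training sentences (token lists mirror the patterns; the paper's repetition counts may differ):

```python
from itertools import product

# Expand each template into all of its word combinations.
fish_sents  = [list(w) for w in product(["man", "woman"],
                                         ["catch", "eat"],
                                         ["trout", "bass"])]
music_sents = [list(w) for w in product(["man", "woman"],
                                         ["play", "pluck"],
                                         ["acoustic", "bass"])]

print(fish_sents[0])                       # ['man', 'catch', 'trout']
print(len(fish_sents), len(music_sents))   # 8 sentences per template
```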

Ordering of data

Balancing and ordering the distribution of the homograph's interpretations:

  • Random sampling
  • All 'fish' interpretations first
  • All 'musical' interpretations first

Also varied the relative frequency of the two meanings (e.g., one meaning making up only 1/3 of the homograph's examples).
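
A sketch of constructing the three training orders, plus a hypothetical unbalanced-frequency condition, from `fish_sents` and `music_sents` generated in the earlier sketch:

```python
import random

def make_training_order(fish_sents, music_sents, condition, seed=0):
    """Arrange the homograph sentences according to one ordering condition."""
    rng = random.Random(seed)
    if condition == "random":
        order = fish_sents + music_sents
        rng.shuffle(order)                  # interleaved presentation
    elif condition == "fish_first":
        order = fish_sents + music_sents    # all 'fish' uses before any 'musical' uses
    elif condition == "music_first":
        order = music_sents + fish_sents
    else:
        raise ValueError(f"unknown condition: {condition}")
    return order

# Hypothetical unbalanced condition: the 'fish' meaning is only 1/3 of the examples.
unbalanced = music_sents + music_sents + fish_sents
```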

Evaluation

Evaluated by computing the cosine similarity between the learned word embedding vectors, e.g., how close the final 'bass' vector is to 'trout' versus 'acoustic', to see which meaning survives training.
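
A sketch of that cosine-similarity check, assuming `model` is a trained gensim Word2Vec model as in the earlier sketch:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Which meaning does the single 'bass' vector end up closer to?
fish_sim  = cosine(model.wv["bass"], model.wv["trout"])
music_sim = cosine(model.wv["bass"], model.wv["acoustic"])
print(f"bass~trout: {fish_sim:.2f}   bass~acoustic: {music_sim:.2f}")
```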

Experiment 2

Conducted on natural-language data from the TASA corpus.