Exploiting the positions and the occurrences of words, and the inter-dependencies between them, for language modeling, as a means to exploit long contexts more effectively and improve the modeling capability

Bold – Questions

Color – Answers

  1. What is the motivation of this thesis? – Introduction
  2. What is the problem to solve?
  3. What is an LM? – A model that estimates the probability of a word sequence.
  4. How useful is an LM to different NLP applications? – NLP applications rely on an LM to hypothesize a linguistically well-formed sentence/paragraph/document under a specific condition – for example, speech recognition, machine translation, information retrieval, and handwriting recognition.
  5. What are the classes of LM? – N-gram, distant bigram, skip-gram, trigger, bag-of-words, and their linear/log-linear interpolations – each of these LMs reduces the word sequence in the history under different assumptions to keep the data scarcity problem manageable.
  6. What problems do these LMs address? – Alleviating the data scarcity problem by treating the word sequence in the history as simpler events.
  7. What is the problem this thesis addresses? – The inter-dependencies among the events are neglected when multiple events in the history are incorporated for word prediction.
  8. How does this problem affect the performance of LMs? – The parameters of the combined model are not consistent with the statistics in the training data.
  9. How was this problem solved traditionally? – By using a bucketing scheme or a maximum entropy model to combine the events.
  10. How does this thesis address the problem? – By exploiting the events jointly; the LM is decomposed into multiple component models in order to alleviate the data scarcity problem while maintaining the inter-dependencies.
  11. How does the proposed solution compare with the conventional approaches? – Conventional approaches exploit each event separately with its own model and then combine the component models by linear/log-linear interpolation.
  1. What are the state-of-the-art approaches for LM?
  2. What is an LM? – A model that estimates the probability of a word sequence.
  3. How is an LM built? – By using the chain rule (written out, together with the perplexity measure, after this list).
  4. Corpora for LM? Toolkits?
  5. How are LMs applied to different applications? – Speech recognition – machine translation – information retrieval – handwriting recognition.
  6. How is an LM evaluated? – Perplexity – task-specific evaluation, e.g. WER, BLEU, etc.
  7. How was this problem solved conventionally? – Background
  8. What problems do these LMs address? – Alleviating the data scarcity problem.
  9. What are the classes of LM? – N-gram, distant bigram, skip-gram, trigger, bag-of-words, and their linear/log-linear interpolations – each of these LMs reduces the word sequence in the history under different assumptions to keep the data scarcity problem manageable.
  10. What are the key concepts & relationships between these classes of LM? – These classes of LM exploit simpler events derived from the word sequence in the history for word prediction – by dropping words from the sequence (to form a shorter sub-sequence) or by disregarding the arrangement of the words.
  11. What are the weaknesses & strengths of these LMs?
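The chain-rule construction, the n-gram assumption, and the perplexity measure mentioned above can be written out as follows (standard definitions, not notation taken from any specific chapter):

$$P(w_1^T) = \prod_{t=1}^{T} P(w_t \mid w_1^{t-1}) \;\approx\; \prod_{t=1}^{T} P(w_t \mid w_{t-n+1}^{t-1}) \quad (n\text{-gram assumption}),$$

$$\mathrm{PPL}(w_1^T) = P(w_1^T)^{-1/T}.$$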
  1. What problems still exist in language modelling? – The inter-dependencies among the events are neglected when multiple events in the history are incorporated for word prediction.
  2. How does this problem affect the performance of LMs? – The parameters of the combined model are not consistent with the statistics in the training data (the conventional combination is written out after this list).
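To make the problem concrete: a standard way of combining event-specific component models is linear interpolation (an illustration of the conventional setting, not a formula quoted from the thesis),

$$P(w_t \mid h) = \sum_{k} \lambda_k \, P_k\!\left(w_t \mid e_k(h)\right), \qquad \sum_{k} \lambda_k = 1,$$

where each $e_k(h)$ is a simpler event derived from the history $h$ (the preceding $n-1$ words, a distant bigram, the bag of words, etc.) and the $\lambda_k$ are interpolation weights. Since every $P_k$ is estimated from its own event in isolation, the dependencies between the events $e_k(h)$ are not captured, which is the inconsistency with the training-data statistics described above.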
  1. Our proposed solution for a problem in LM? –
  2. What is the problem we want to solve?
  3. We again illustrate the problem in more detail.
  4. The inter-dependencies among the events are neglected when multiple events in the history are incorporated for word prediction.
  5. How does this thesis address the problem? – By exploiting the events jointly; the LM is decomposed into multiple component models in order to alleviate the data scarcity problem while maintaining the inter-dependencies.
  6. Provide details of the proposed solution. –
  7. Elaborate on the formulation of the proposed solution.
  8. Elaborate on how the proposed solution solves the problem. – Provide figures and graphs to support the description.
  9. E.g. decoupling, generalization, smoothing –
  10. How to decompose the LM into multiple component models according to the events? – By decoupling the history into TD and TO components.
  11. How to compute probabilities based on the component models? – Smoothing – weighting (a sketch of the decomposition follows this list).
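A minimal sketch of the decomposition referred to above, under the assumption that TO(h) records which words occur in the history, TD(h) records at which distances they occur, and the two component models are combined log-linearly with weights $\alpha$ and $\beta$ (the exact weighting and smoothing schemes are left to the chapter):

$$P(w_t \mid h) \;\propto\; P_{\mathrm{TO}}\!\left(w_t \mid \mathrm{TO}(h)\right)^{\alpha} \, P_{\mathrm{TD}}\!\left(w_t \mid \mathrm{TD}(h)\right)^{\beta}.$$

The intended contrast with the interpolation shown earlier is that TO(h) and TD(h) are two views of the same history, decoupled from it jointly, rather than independent events that are modelled and weighted separately.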
  1. How does TDTO relate to other works?
  2. Different types of events can be exploited; which types of events are considered by this thesis, and why?
  1. How to show that the proposed solution can actually solve the problem? – Compare the proposed solution to the conventional solutions – highlight the other benefits of the proposed method.
  2. Does the TDTO model capture long contexts more effectively than the higher-order n-gram model?
  3. Does the TDTO model capture long contexts more effectively than the bag-of-words model?
  4. Does the TDTO model capture long contexts more effectively than the distant bigram model?
  5. Does the TDTO model capture long contexts more effectively than the trigger model?
  6. What is the impact on other applications that require an LM? By addressing the problem, can an NLP application benefit? – Speech recognition – document classification – word prediction.
  7. What are the drawbacks of the proposed solution? – The smoothing of the TD and TO component models.
  1. An extension of Chapter 3’s solution
  2. Why is there a need for this extension? – Smoothing of the TD and TO component models has so far been done naively.
  3. What are the existing solutions for smoothing? –
  4. Describe the various solutions.
  5. The NN method for performing smoothing?
  6. History, successes, approaches.
  7. What are the NN LM approaches?
  8. How does the NN perform smoothing?
  9. By projecting the input into a continuous space, in which interpolation between vectors in neighboring regions of the space provides the smoothing.
  10. So what can we do? – The NN has been shown to be useful for smoothing the LM.
  11. Why is the NN suitable for smoothing the TDTO model?
  12. How to implement the NN-TDTO? – Show the architecture of the NN-TDTO (an illustrative sketch follows this list).
  13. What is the architecture of the NN-TDTO?
  14. How to encode the inputs of the NN-TDTO?
  15. How to show that the extension of the proposed solution is effective? – Show the perplexity results – show the WER results.
  16. Does the NN-TDTO model have lower perplexity than the TDTO model?
  17. Does the NN-TDTO model have lower WER than the TDTO model?
  18. How does the NN-TDTO compare to other NN-based LMs? – Show the perplexity results compared to the NN-based n-gram LM – show the perplexity results compared to the RNNLM.
  19. How does the perplexity of the NN-TDTO compare to that of the NNLM?
  20. How does the perplexity of the NN-TDTO compare to that of the RNNLM?
  21. What are the other benefits of the NN-TDTO? –
  22. How does the complexity of the NN-TDTO compare to that of the NNLM?
  23. How does the complexity of the NN-TDTO compare to that of the RNNLM?
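As an illustration of how TD- and TO-style inputs could be presented to a feed-forward network, the following is a minimal sketch in Python/NumPy. The layer sizes, the encoding of the two input blocks, and the omission of a separate n-gram block are assumptions made purely for the illustration; this is not the NN-TDTO architecture of the chapter, only the general pattern of projecting discrete history features into a continuous space and predicting the next word with a softmax layer.

```python
# Minimal sketch of a feed-forward LM over TDTO-style inputs (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

V = 1000      # vocabulary size (assumed for the illustration)
E = 32        # word-embedding size (assumed)
D_MAX = 10    # number of distance slots used for the TD block (assumed)
H = 64        # hidden-layer size (assumed)

# Parameters: shared word embeddings, one projection per input block, softmax output layer.
emb = rng.normal(scale=0.1, size=(V, E))
W_to = rng.normal(scale=0.1, size=(E, H))          # projection of the TO (occurrence) block
W_td = rng.normal(scale=0.1, size=(D_MAX * E, H))  # projection of the TD (distance) block
b_h = np.zeros(H)
W_out = rng.normal(scale=0.1, size=(H, V))
b_out = np.zeros(V)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def predict(history):
    """P(next word | history), with the history given as a list of word ids, oldest first."""
    # TO block: which words occur in the history, regardless of where they occur
    # (encoded here as the average embedding of the distinct history words).
    to_vec = emb[sorted(set(history))].mean(axis=0)

    # TD block: where the words occur, one embedding slot per distance
    # (distance 1 = the immediately preceding word); unused slots stay zero.
    td_vec = np.zeros((D_MAX, E))
    for dist, w in enumerate(reversed(history[-D_MAX:]), start=1):
        td_vec[dist - 1] = emb[w]

    h = np.tanh(to_vec @ W_to + td_vec.ravel() @ W_td + b_h)
    return softmax(h @ W_out + b_out)

# Usage: distribution over the next word after an example history of word ids.
p = predict([12, 7, 43, 7, 99])
print(p.shape, round(p.sum(), 6))   # (1000,) 1.0
```

Training would back-propagate a cross-entropy loss through these parameters; only the forward pass is shown to keep the sketch short.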
  1. Conclusion
  2. What has this thesis achieved? –
  3. How effective is the proposed TDTO model?
  4. How is the perplexity?
  5. How is the WER?
  6. What are the directions to further this thesis? –

??? Where should the following questions be?

  1. How was this problem solved traditionally & how does this thesis address the problem? – By using a bucketing scheme or a maximum entropy model – by exploiting the events jointly; the LM is decomposed into multiple component models in order to alleviate the data scarcity problem while maintaining the inter-dependencies.