Estimate Emotion Probability Vectors Using LLMs: Acknowledgements and References

10 May 2024

This paper is available on arXiv under the CC 4.0 license.

Authors:

(1) D. Sinclair, Imense Ltd (email: david@imense.com);

(2) W. T. Pye, Warwick University (email: willempye@gmail.com).

6. Acknowledgements

The authors acknowledge the extraordinary generosity of Meta in releasing model weights under reasonable terms for their Llama 2 series of pre-trained Large Language Models.

7. References

[1] OpenAI. GPT-4 technical report, 2023. URL https://arxiv.org/pdf/2303.08774.pdf.

[2] Hugo Touvron, Thomas Scialom, et al. (Meta GenAI). Llama 2: Open foundation and fine-tuned chat models, 2023. URL https://arxiv.org/pdf/2307.09288.pdf.

[3] Rosalind W. Picard. Affective computing. MIT Press, 1997.

[4] J. Strabismus. The Jedi religion: Is love the force? Amazon Kindle, 2013.

[5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.

[6] Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. Sentiment analysis in the era of large language models: A reality check, 2023.