Pranav Shyam
OpenAI
Verified email address at openai.com
Title
Cited by
Year
Language Models are Few-shot Learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
29744*, 2020
Glide: Towards photorealistic image generation and editing with text-guided diffusion models
A Nichol, P Dhariwal, A Ramesh, P Shyam, P Mishkin, B McGrew, ...
arXiv preprint arXiv:2112.10741, 2021
2062, 2021
Gpt-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
649, 2023
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
420, 2023
Text and code embeddings by contrastive pre-training
A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ...
arXiv preprint arXiv:2201.10005, 2022
217, 2022
Model-Based Active Exploration
P Shyam, W Jaśkowski, F Gomez
International Conference on Machine Learning (ICML), 2019
199, 2018
Attentive Recurrent Comparators
P Shyam, S Gupta, A Dukkipati
International Conference on Machine Learning (ICML), 3173-3181, 2017
146, 2017
Language Models are Few-Shot Learners. 2020. doi: 10.48550
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv, 5-7, 2020
142, 2020
Language models are few-shot learners. arXiv
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Computer Science, Computation and Language, 2020
136, 2020
Training agents using upside-down reinforcement learning
RK Srivastava, P Shyam, F Mutz, W Jaśkowski, J Schmidhuber
arXiv preprint arXiv:1912.02877, 2019
114, 2019
Language models are few-shot learners. CoRR abs/2005.14165 (2020)
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
URL: https://arxiv.org/abs/2005.14165, 2020
70, 2020
Artificial Intelligence for Prosthetics - Challenge Solutions
Ł Kidziński, C Ong, SP Mohanty, J Hicks, SF Carroll, B Zhou, H Zeng, ...
arXiv preprint arXiv:1902.02441, 2019
43, 2019
Unsupervised neural machine translation with generative language models only
JM Han, I Babuschkin, H Edwards, A Neelakantan, T Xu, S Polu, A Ray, ...
arXiv preprint arXiv:2110.05448, 2021
23, 2021
Language models are few-shot learners [cs]
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Proceedings of 2020 Neural Information Processing Systems, 2020
13, 2020
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
3, 2024
Articles 1–15