Mohamed Elhoseiny, Ph.D.
Associate Professor, KAUST (hiring postdocs & grad students)
Verified email at kaust.edu.sa - Homepage
Title
Cited by · Year
MiniGPT-4: Enhancing vision-language understanding with advanced large language models
D Zhu, J Chen, X Shen, X Li, M Elhoseiny
arXiv preprint arXiv:2304.10592 (ICLR 2024), 2023
1718 · 2023
Memory Aware Synapses: Learning what (not) to forget
R Aljundi, F Babiloni, M Elhoseiny, M Rohrbach, T Tuytelaars
European Conference on Computer Vision (ECCV), 2018
1708 · 2018
Efficient Lifelong Learning with A-GEM
A Chaudhry, MA Ranzato, M Rohrbach, M Elhoseiny
International Conference on Learning Representations (ICLR), 2019
1512 · 2019
Continual learning with tiny episodic memories
A Chaudhry, M Rohrbach, M Elhoseiny, T Ajanthan, P Dokania, P Torr, ...
Workshop on Multi-Task and Lifelong Reinforcement Learning, 2019
898* · 2019
Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction
A Mohamed, K Qian, M Elhoseiny*, C Claudel* (*equal advising)
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
852 · 2020
CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms
A Elgammal, B Liu, M Elhoseiny, M Mazzone
International Conference on Computational Creativity (ICCC), 2017
760* · 2017
PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies
G Qian, Y Li, H Peng, J Mai, HAAK Hammoud, M Elhoseiny, B Ghanem
Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS), 2022
529 · 2022
Imagine it for me: Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts
Y Zhu, M Elhoseiny, B Liu, A Elgammal
CVPR, 2018
478* · 2018
Write a classifier: Zero-shot learning using purely textual descriptions
M Elhoseiny, B Saleh, A Elgammal
Proceedings of the IEEE International Conference on Computer Vision, 2584-2591, 2013
384 · 2013
SPDA-CNN: Unifying Semantic Part Detection and Abstraction for Fine-grained Recognition
H Zhang, T Xu, M Elhoseiny, X Huang, S Zhang, A Elgammal, D Metaxas
CVPR, 2016
357 · 2016
MiniGPT-v2: Large language model as a unified interface for vision-language multi-task learning
J Chen, D Zhu, X Shen, X Li, Z Liu, P Zhang, R Krishnamoorthi, ...
arXiv preprint arXiv:2310.09478, 2023
332 · 2023
ReferIt3D: Neural Listeners for Fine-Grained 3D Object Identification in Real-World Scenes
P Achlioptas, A Abdelreheem, F Xia, M Elhoseiny, L Guibas
16th European Conference on Computer Vision (ECCV), 2020
251 · 2020
Uncertainty-guided Continual Learning with Bayesian Neural Networks
S Ebrahimi, M Elhoseiny, T Darrell, M Rohrbach
International Conference on Learning Representations (ICLR), 2020
247* · 2020
StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2
I Skorokhodov, S Tulyakov, M Elhoseiny
2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
244 · 2022
VisualGPT: Data-efficient adaptation of pretrained language models for image captioning
J Chen, H Guo, K Yi, B Li, M Elhoseiny
2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
209 · 2022
Large-Scale Visual Relationship Understanding
J Zhang, Y Kalantidis, M Rohrbach, M Paluri, A Elgammal, M Elhoseiny
AAAI, 2019
176 · 2019
Adversarial Generation of Continuous Images
I Skorokhodov, S Ignatyev, M Elhoseiny
2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
173 · 2021
ArtEmis: Affective Language for Visual Art
P Achlioptas, M Ovsjanikov, K Haydarov, M Elhoseiny, L Guibas
2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
163 · 2021
The Shape of Art History in the Eyes of the Machine
A Elgammal, M Mazzone, B Liu, D Kim, M Elhoseiny
AAAI, 2018
161 · 2018
Link the head to the "beak": Zero shot learning from noisy text description at part precision
M Elhoseiny, Y Zhu, H Zhang, A Elgammal
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
153 · 2017
Articles 1–20