Lukas Schott
Bosch Center for AI
Verified email at bethgelab.org
Title
Cited by
Year
Towards the first adversarially robust neural network model on MNIST
L Schott, J Rauber, M Bethge, W Brendel
International Conference on Learning Representations 2019, 2018
Cited by 462 · 2018
Comparative study of deep learning software frameworks
S Bahrampour, N Ramakrishnan, L Schott, M Shah
arXiv preprint arXiv:1511.06435, 2015
Cited by 233 · 2015
A simple way to make neural networks robust against diverse image corruptions
E Rusak, L Schott, RS Zimmermann, J Bitterwolf, O Bringmann, M Bethge, ...
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 230 · 2020
Towards nonlinear disentanglement in natural data with temporal sparse coding
D Klindt, L Schott, Y Sharma, I Ustyuzhaninov, W Brendel, M Bethge, ...
arXiv preprint arXiv:2007.10930, 2020
Cited by 147 · 2020
Comparative study of Caffe, Neon, Theano, and Torch for deep learning
S Bahrampour, N Ramakrishnan, L Schott, M Shah
Cited by 143 · 2016
Visual representation learning does not generalize strongly within the same domain
L Schott, J Von Kügelgen, F Träuble, P Gehler, C Russell, M Bethge, ...
arXiv preprint arXiv:2107.08221, 2021
Cited by 74 · 2021
Score-based generative classifiers
RS Zimmermann, L Schott, Y Song, BA Dunn, DA Klindt
arXiv preprint arXiv:2110.00473, 2021
Cited by 70 · 2021
Increasing the robustness of DNNs against image corruptions by playing the game of noise
E Rusak, L Schott, R Zimmermann, J Bitterwolf, O Bringmann, M Bethge, ...
Cited by 54 · 2020
Learned watershed: End-to-end learning of seeded segmentation
S Wolf, L Schott, U Kothe, F Hamprecht
Proceedings of the IEEE International Conference on Computer Vision, pp. 2011–2019, 2017
Cited by 54 · 2017
Deep learning on symbolic representations for large-scale heterogeneous time-series event prediction
S Zhang, S Bahrampour, N Ramakrishnan, L Schott, M Shah
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2016
Cited by 30 · 2016
Understanding neural coding on latent manifolds by sharing features and dividing ensembles
M Bjerke, L Schott, KT Jensen, C Battistin, DA Klindt, BA Dunn
arXiv preprint arXiv:2210.03155, 2022
Cited by 9 · 2022
Towards the first adversarially robust neural network model on MNIST, 2019
L Schott, J Rauber, W Brendel, M Bethge
URL https://arxiv.org/pdf/1805.09190.pdf, 2018
Cited by 8 · 2018
Comparative study of Caffe, Neon, Theano, and Torch for deep learning
S Bahrampour, N Ramakrishnan, L Schott, M Shah
arXiv 1511, 2015
Cited by 6 · 2015
Comparative study of deep learning software frameworks. arXiv 2015
S Bahrampour, N Ramakrishnan, L Schott, M Shah
arXiv preprint arXiv:1511.06435
Cited by 5
Mind the Gap Between Synthetic and Real: Utilizing Transfer Learning to Probe the Boundaries of Stable Diffusion Generated Data
L Hennicke, CM Adriano, H Giese, JM Koehler, L Schott
arXiv preprint arXiv:2405.03243, 2024
Cited by 3 · 2024
Analytical uncertainty-based loss weighting in multi-task learning
L Kirchdorfer, C Elich, S Kutsche, H Stuckenschmidt, L Schott, JM Köhler
arXiv preprint arXiv:2408.07985, 2024
Cited by 2 · 2024
Challenging Common Assumptions in Multi-task Learning
C Elich, L Kirchdorfer, JM Köhler, L Schott
arXiv preprint arXiv:2311.04698, 2023
Cited by 2 · 2023
Method for training a machine learning model
L Schott, JM Koehler, C Blaiotta
US Patent App. 18/774,344, 2025
2025
Device and method for classifying a digital image with an image classifier, for training the image classifier, and for determining an image dataset for the training
L Schott, C Blaiotta
US Patent App. 18/771,370, 2025
2025
Attention Is All You Need For Mixture-of-Depths Routing
A Gadhikar, SK Majumdar, N Popp, P Saranrittichai, M Rapp, L Schott
arXiv preprint arXiv:2412.20875, 2024
2024
Articles 1–20