Computer Graphics
TU Braunschweig

Seminar Computergraphik SS'24

Sascha Fricke

Hörerkreis: Bachelor & Master

Modul: INF-STD-66, INF-STD-68
Vst.Nr.: 4216012, 4216021

Topic: Current Research in Computer Graphics


In the Computer Graphics Seminar we discuss current research results in the field of Computer Graphics. The tasks of the participants are to write a research report, to review the work of another student in writing, and to later revise and improve their own report to reflect the input gathered from the review. Finally, at the end of the semester and during a block seminar, each student will give an oral presentation on their respective research report. The presentation must be rehearsed beforehand in front of the assigned individual supervisor, whose suggestions for improvement must then be incorporated.


The course is aimed at Bachelor's and Master's students in Computer Science (Informatik), IST, and Business Informatics (Wirtschaftsinformatik), as well as students pursuing their Master in Data Science.

Registration takes place centrally via Stud.IP. The number of participants is limited to 6 students.

Important Dates

All dates listed here must be adhered to. Attendance at all events is mandatory.

  • 01.02.2024 to 07.02.2024: Registration via Stud.IP
  • Until 13.03.2024: Submission of topic requests
  • 03.04.2024, 13:15: Kick-Off Meeting (G41b, ICG) [Slides]
  • 16.04.2024: End of the deregistration period
  • 05.05.2024: Submission of the written paper
  • 18.05.2024: Submission of the review report
  • 09.06.2024: Submission of the revised paper
  • Until 21.06.2024: Trial presentation
  • 27.06.2024: Submission of the presentation slides
  • 28.06.2024, 09:00 - 12:00: Presentations - Block Event

Registered students have the possibility to deregister until two weeks after the start of the lectures (i.e., 16.04.2024) at the latest. For a successful deregistration, it is necessary to notify the seminar supervisor by e-mail.

Registered students, as well as students on the waiting list, have the possibility to send their top 3 topic requests in order of preference by email by 13.03.2024, so that they will be considered for the topic assignment. In the email, it must also be stated how many students are participating in the seminar in the current study program.

Once a topic has been assigned to the student, all subsequent submissions have to be sent by email to the respective advisor and additionally to the seminar supervisor. Unless communicated otherwise, the deadline for all submissions is 23:59 on the due day.

The respective submissions are made by email to the seminar supervisor and, if necessary, additionally to the respective advisor.

If you have any questions about the course, please contact the seminar supervisor.


  • The final assignment of topics will be communicated during the Kick-Off event.
  • For each topic, the student needs to prepare a report in LaTeX using the ICG Template.
    The content of the report is a short summary of the work in one's own words and an elaboration of its main points, with a minimum length of 8 pages. The report should clearly reflect that the topic has been understood and critically assessed.
  • Each participant will later write a 1-2 page review of the report of another student (assigned by the seminar supervisor). When writing the review, particular attention should be paid to the comprehensibility and linguistic style of the summary.
  • After receiving the review on one's own paper, the student will need to revise and improve their manuscript according to the received feedback.
  • For the final presentations, the students can either use their own laptops or one provided by the Institute. Students who need to use the ICG laptop must contact the seminar supervisor in time, at least two weeks before the presentations.
  • The topics will be presented in approximately 20-minute presentations, each followed by a discussion.
  • The language for the presentations can be either German or English.
  • The oral presentation, the written paper, and the preparation of the review report are all mandatory requirements for passing the course successfully.


Files and Templates


  1. Perceptual error optimization for Monte Carlo animation rendering
    Miša Korać, Corentin Salaün, Iliyan Georgiev, Pascal Grittmann, Philipp Slusallek, Karol Myszkowski, and Gurprit Singh
    SIGGRAPH Asia 2023
    Advisor: Colin Groth

    The paper presents a new method for optimizing perceptual error in Monte Carlo animation rendering. It extends previous approaches by considering both spatial and temporal aspects, leading to a more accurate and visually pleasing distribution of errors.

  2. ReconFusion: 3D Reconstruction with Diffusion Priors
    Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, and Aleksander Holynski
    CVPR 2024 (To Appear)
    Advisor: Florian Hahlbohm

    This paper shows how integrating a diffusion prior into NeRF-based approaches for novel view synthesis can improve reconstruction quality for a given scene when there are only few input images.

  3. Data-driven Pixel Filter Aware MIP Maps for SVBRDFs
    Pauli Kemppinen, Miika Aittala and Jaakko Lehtinen
    EGSR 2023
    Advisor: Sascha Fricke

    In mip-map pyramids of normal maps, a common problem is that normal map details are lost at lower resolutions, even though these details should actually become part of the surface roughness. This work develops a data-driven method that generates a mip-map pyramid for textures which aims to preserve these and other visual material properties across the entire pyramid. Furthermore, their method also allows converting these material properties between different material models and parameterizations (e.g., GGX and Beckmann).
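    As general background (not the paper's own method), the classic Toksvig approximation illustrates the underlying idea: when downsampled normals are averaged, their shortened length encodes the lost angular detail, which can be folded into the roughness. A minimal sketch in Python, assuming unit-length input normals and a simple 2x2 box filter:

    ```python
    import numpy as np

    def downsample_normals(normals):
        # normals: (H, W, 3) array of unit vectors; average 2x2 blocks.
        h, w, _ = normals.shape
        return normals.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))

    def toksvig_roughness(avg_normals, base_roughness=0.1):
        # Averaged normals are shorter where the block's normals disagree;
        # |n_avg| < 1 therefore encodes lost normal variation.
        length = np.linalg.norm(avg_normals, axis=-1).clip(1e-6, 1.0)
        variance = (1.0 - length) / length  # Toksvig variance estimate
        # Fold the estimated variance into the base roughness.
        return np.sqrt(base_roughness**2 + variance)
    ```

    A flat normal map keeps its base roughness after downsampling, while a block of strongly varying normals yields a noticeably rougher mip texel.
    
    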

  4. HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling
    Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhöfer, Johannes Kopf, Matthew O'Toole, Changil Kim
    CVPR 2023
    Advisor: Moritz Kappel

    The paper describes a method for the volumetric reconstruction and subsequent rendering of dynamic scenes from a set of input images.

    A dedicated ray-conditioned sampling network and a compact 4D scene representation enable fast frame rates, high resolutions, and improved memory efficiency.

  5. Who is Speaking Actually? Robust and Versatile Speaker Traceability for Voice Conversion
    Yanzhen Ren, Hongcheng Zhu, Liming Zhai, Zongkun Sun, Rubing Shen, Lina Wang
    ACM International Conference on Multimedia 2023
    Advisor: JP Tauscher

    This paper deals with the development of a new framework, "VoxTracer", for voice conversion (VC). This voice-transformation technique is becoming increasingly common, but also raises concerns about its potential for misuse. VoxTracer aims to improve speaker traceability by embedding a unique speaker identity into the converted voice. This process resembles audio watermarking but goes beyond it: VoxTracer integrates the speaker's identity into the voice conversion in a way that is imperceptible yet can be accurately traced back, even when the speech quality is severely degraded. The framework is versatile and works with various VC methods, audio coding standards, audio compression schemes, and bitrates.

  6. DiffSwap: High-Fidelity and Controllable Face Swapping via 3D-Aware Masked Diffusion

    Wenliang Zhao, Yongming Rao, Weikang Shi, Zuyan Liu, Jie Zhou, and Jiwen Lu
    CVPR 2023
    Advisor: Susana Castillo
    Note: supervision, written report, and talk in English!

    Face swapping is a popular type of replacement deepfake that aims to transfer the identity of a source face to a target image or video frame while keeping the target's attributes (e.g., pose, expression, background) unchanged. Unlike previous work, which relies on network architectures and loss functions to fuse information from source and target faces, this paper reformulates face swapping as a conditional inpainting task. By using a diffusion model guided by the desired face attributes, DiffSwap allows for high-fidelity, controllable, and customizable results that preserve the shape of the source face.


Colin Groth

Florian Hahlbohm

Sascha Fricke

Moritz Kappel

JP Tauscher

Susana Castillo