POSTDOC IN NATURAL LANGUAGE PROCESSING SECURITY (NLPSec) at Aalborg Universitet
København, Region Hovedstaden, Denmark
Full Time


Start Date

Immediate

Expiry Date

08 May, 25

Salary

Not specified

Posted On

09 Feb, 25

Experience

4 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

Are you interested in the intersection between Large Language Models and Cybersecurity, and in working in a rapidly expanding Natural Language Processing (NLP) team? The Department of Computer Science at The Technical Faculty of IT and Design, Aalborg University, invites applications for one or more postdoc positions from 1 September 2025 or as soon as possible thereafter. The positions are available for a period of three years and are located at our Copenhagen campus.

JOB DESCRIPTION

The postdoc will be working with the Natural Language Processing (NLP) team in the Data, Knowledge, and Web Engineering (DKW) group, physically located in Copenhagen. The position is anchored in the project Linguistically Motivated Language Model Security, funded by a Novo Nordisk Foundation Data Science Investigator grant and led by Professor Johannes Bjerva. The AAU-NLP team currently comprises a full professor and research leader, an assistant professor, two postdocs, and five PhD students. The project will expand this team by hiring two postdocs (three years each) and two new PhD students. The team is expected to grow substantially over the next few years.
The aim of the project is to conduct cutting-edge research in the emerging research area of NLP Security, in particular concerning Large Language Models (LLM Security). This includes areas such as adversarial attacks, backdoor attacks, embedding inversion attacks, and other security vulnerabilities studied across machine learning. Unlike traditional software, LLMs process natural language inputs from users, which opens a vast array of potential attack vectors. These vulnerabilities typically cannot be boiled down to a single line of code to be fixed; rather, they arise from complex interactions between AI architectures, training data, prompts, and manipulation thereof. As such, LLM security lies outside the scope of traditional cybersecurity and requires the development of novel methodology. Given the explorative nature of this project, in a novel research area, a large amount of research freedom is expected.
The project strongly emphasizes international collaboration: we plan to host top international researchers in our lab throughout the project, and you will have the opportunity to visit other top research environments during your position. Both academic and industrial exchanges are encouraged.

Together with the project team, you will:

  • Design and implement NLP experiments, focusing on methodological development.
  • Establish a large-scale dataset for NLP Security.
  • Collaborate with researchers in NLP, Cybersecurity, and AI security to produce high-impact research.
  • Publish in top-tier international conferences and journals within NLP, AI, ML, and Security.

We’re looking for a candidate with:

  • A PhD in computer science, machine learning, cybersecurity, data science, or a closely related field. Additional training or experience in linguistics is a plus.
  • Expertise in natural language processing, particularly large language models. Experience with multilingual NLP is a plus.
  • A strong publication record in relevant fields, e.g., with publications in venues such as ACL, EMNLP, TACL, AAAI, ICML, ICLR, NeurIPS, S&P, and CCS.
  • Excellent communication skills in English (written and oral).

QUALIFICATION REQUIREMENTS

Appointment as a postdoc presupposes scientific qualifications at PhD level or similar scientific qualifications. The research potential of each applicant will be emphasized in the overall assessment. Appointment as a postdoc cannot exceed a period of four years in total at Aalborg University.

Responsibilities

Please refer to the job description above for details.
