LLM-jp Meeting

Join

Join us if our goals resonate with you!
It is our policy to keep all discussions, processes, results, and failures open and transparent.

Participating university research labs and corporate teams join as groups, each with one representative. The leader or representative (e.g., the laboratory PI) is responsible for managing the participation of their members.

* By applying for participation, you agree to comply with the anti-harassment policy and to refrain from any behavior that interferes with the activities of the study group. If the steering committee determines that such behavior has occurred, you may be asked to withdraw your membership.

Meeting Schedule

Please see the session schedule below.

2025-2026 Sessions

February 24 (Tue.)
14:30-18:00
● Ended
March 17 (Tue.)
※ Schedule updated
14:30-18:00
● Ended

2026-2027 Sessions

April 21 (Tue.)
14:30-18:00
● Upcoming
May 19 (Tue.)
14:30-18:00
● Upcoming
June 16 (Tue.)
14:30-18:00
● Upcoming
July 21 (Tue.)
14:30-18:00
● Upcoming
September 15 (Tue.)
14:30-18:00
● Upcoming
October 20 (Tue.)
14:30-18:00
● Upcoming
November 17 (Tue.)
14:30-18:00
● Upcoming
December 15 (Tue.)
14:30-18:00
● Upcoming
January 19 (Tue.)
14:30-18:00
● Upcoming
February 16 (Tue.)
14:30-18:00
● Upcoming
March 16 (Tue.)
14:30-18:00
● Upcoming

Our Goals

Background and Key Issues

  • Rapid advances in large language models (LLMs) are triggering a technological “phase transition” that is beginning to drive major societal transformation. Supported by their high capability and generality, these technologies are expected to impact all industries and become a fundamental infrastructure for a wide range of scientific and technological research.
  • To foster healthy experimentation and stimulate innovation in this rapidly evolving field, several challenges must be addressed:
    • Technical challenges related to LLMs: mathematical understanding of learning principles (e.g., how emergence and general capabilities arise), as well as improvements in efficiency such as data efficiency, model efficiency, and environmentally sustainable (“green”) AI.
    • Societal challenges related to LLMs: explainability and interpretability (the black-box problem); fairness (bias issues); safety (misinformation and hallucinations, personal data protection, copyright, and compliance); and reliability (how these systems can be assured and trusted).
    • Expansion of LLM applications across domains: applications in areas such as healthcare, law, and education, as well as integration with multimodal information processing and robotic control.
  • Addressing these issues requires the continuous development of fully open, commercially usable models, together with sustained research and development aimed at solving the challenges above. From the perspective of economic security, it is also essential that such models provide sufficient coverage of Japanese-language information and allow clear control over usage policies and the confidentiality of input data.
  • At present, only a few models are fully open, including their training data. Meanwhile, large-scale investments in generative AI are being made overseas, and Japan lags significantly behind.

Future Directions

  • Research and development in large language models (LLMs) is advancing at a rapid pace. In Japan as well, it is necessary to build models at a scale that enables the resolution of the challenges described above, while keeping pace with the latest global developments. At the same time, continued efforts are required to deepen our understanding of the underlying principles and to ensure transparency and reliability.
  • Given the rapid progress in LLM research, a strictly top-down design of research and development programs is not necessarily optimal. What matters most is cultivating the necessary ecosystem: establishing computational infrastructure and language model development platforms (including human resources such as engineers) and creating an environment in which researchers in Japan and abroad can engage in diverse experimentation and exploration.
  • Since 2024, these initiatives have been supported by the MEXT project “R&D Hub Aimed at Ensuring Transparency and Reliability of Generative AI Models,” under which we have been advancing the efforts described above.