
Generative Pre-Trained Transformer for Symbolic Regression Base In-Context Reinforcement Learning

2024-05-08


Li, Yanjie; Li, Weijun; Yu, Lina; Wu, Min; Liu, Jingyi; Li, Wenqiang; Hao, Meilan; Wei, Shu; Deng, Yusong. Source: arXiv, April 9, 2024.

Abstract:

The mathematical formula is the human language for describing nature and is the essence of scientific research. Finding mathematical formulas from observational data is therefore a major demand of scientific research and a major challenge for artificial intelligence; this area is called symbolic regression (SR). Originally, symbolic regression was often formulated as a combinatorial optimization problem and solved with genetic programming (GP) or reinforcement learning algorithms. These two kinds of algorithms have strong noise robustness and good versatility, but their inference usually takes a long time, so search efficiency is relatively low. Later, methods based on large-scale pre-training were proposed: they use a large number of synthetic data-point and expression pairs to train a Generative Pre-Trained Transformer (GPT). Such a GPT needs only a single forward pass to obtain a result, so its inference speed is very fast. However, its performance depends heavily on the training data and degrades on data outside the training distribution, which leads to poor noise robustness and versatility. Can we combine the advantages of these two categories of SR algorithms? In this paper, we propose FormulaGPT, which trains a GPT using massive sparse-reward learning histories of reinforcement-learning-based SR algorithms as training data. After training, the reinforcement-learning-based SR algorithm is distilled into a Transformer: when new test data arrive, FormulaGPT can directly generate a "reinforcement learning process" and automatically update its learning policy in context. Tested on more than ten datasets including SRBench, FormulaGPT achieves state-of-the-art fitting ability compared with four baselines, and it also achieves satisfactory results in noise robustness, versatility, and inference efficiency.
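The distillation idea described above can be sketched in miniature. The snippet below is a toy illustration, not the authors' implementation: it assumes a fitting reward of 1/(1 + RMSE) (a common SR choice) and a simple plain-text serialization of one search run as a sequence of (expression, reward) steps; the paper's actual token encoding and reward definition may differ. A transformer trained on many such serialized histories would learn to continue them, i.e. to carry out a search-like improvement loop in context.

```python
import math

def reward(expr_fn, points):
    """Fitting reward in [0, 1]: 1 / (1 + RMSE) (assumed; a common SR choice)."""
    se = [(expr_fn(x) - y) ** 2 for x, y in points]
    rmse = math.sqrt(sum(se) / len(se))
    return 1.0 / (1.0 + rmse)

def serialize_history(candidates, points):
    """Serialize one search run as 'expr -> r=...' steps, in visit order.
    This string is the kind of training sample a history-distilled GPT
    could be trained to continue (hypothetical format)."""
    steps = [f"{name} -> r={reward(fn, points):.3f}" for name, fn in candidates]
    return " ; ".join(steps)

# Toy target: y = x^2, sampled at a few points.
points = [(x, x * x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

# A miniature search history: expressions an RL run might have visited,
# ending at the exact fit (reward 1.0).
candidates = [
    ("x",   lambda x: x),
    ("2*x", lambda x: 2 * x),
    ("x*x", lambda x: x * x),
]

history = serialize_history(candidates, points)
print(history)
```

Because rewards increase along the sequence, a model trained on many such histories is implicitly trained to propose better expressions as the context grows, which is the "in-context reinforcement learning" behavior the abstract describes.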

Copyright © 2024, The Authors. All rights reserved. (42 refs.)



