- 30 contributors
Fix typos: Prediction -> Precision
fix: Solving the problem of fine-tuning Bert and DistilBert

This pull request enables us to fine-tune Bert and DistilBert on the Clone-detection-POJ-104 task. It solves `IndexError: tuple index out of range`.

Key improvements and changes include:

1. **Changed the model config in run.py**: To finish this task, we should use BertModel and DistilBertModel instead of BertForMaskedLM and DistilBertForMaskedLM. This task does not involve filling in missing tokens in text, which is the purpose of the Masked Language Modeling heads (BertForMaskedLM and DistilBertForMaskedLM). Instead, the task returns the top-K codes with the same semantics as the input code, which requires the model to understand the semantic relationship between different pieces of code. BertModel and DistilBertModel return the encoder's contextual representations directly, which is what we need to compare the semantics of different pieces of code, so they are better suited for this task of returning codes with similar semantics.

2. **Resolved the issue of accessing output elements out of bounds**: For DistilBert fine-tuning, the original model.py has an issue. Specifically, it accesses the second output element of the model:

```python
outputs = self.encoder(input_ids, attention_mask=input_ids.ne(1))[1]
```

This works for the Bert and CodeBert models, whose outputs include a pooled_output. However, since DistilBERT was not pre-trained on the Next Sentence Prediction task, its output does not include a pooled_output; it only includes hidden states and attention distributions. To make the code compatible with DistilBert, we first obtain all of the encoder outputs. If the output length is greater than 1 (Bert or CodeBert), we use the second output (pooled_output). Otherwise (DistilBert), we take the first token ([CLS]) of the first output (sequence_output) as the representation of the entire sequence: the Transformer's self-attention mechanism lets the [CLS] vector capture information about the whole sequence. This is not equivalent to BERT's pooled_output, but it is a common practice.

```python
outputs = self.encoder(input_ids, attention_mask=input_ids.ne(1))
if len(outputs) > 1:
    outputs = outputs[1]
else:
    outputs = outputs[0][:, 0, :]
```

These fixes solve the problem of fine-tuning Bert and DistilBert.
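For reference, a minimal sketch of what the model-class selection in run.py might look like after switching to the bare encoders. The `MODEL_CLASSES` mapping and checkpoint names here are assumptions based on the CodeXGLUE-style scripts, not the exact code in this repository; only the Hugging Face `transformers` classes themselves are taken as given.

```python
# Sketch only: map each --model_type to (config, model, tokenizer) classes,
# using the bare encoders instead of the MaskedLM heads.
from transformers import (
    BertConfig, BertModel, BertTokenizer,
    DistilBertConfig, DistilBertModel, DistilBertTokenizer,
    RobertaConfig, RobertaModel, RobertaTokenizer,
)

MODEL_CLASSES = {
    "bert": (BertConfig, BertModel, BertTokenizer),                          # was BertForMaskedLM
    "distilbert": (DistilBertConfig, DistilBertModel, DistilBertTokenizer),  # was DistilBertForMaskedLM
    "roberta": (RobertaConfig, RobertaModel, RobertaTokenizer),              # CodeBert uses the RoBERTa classes
}

# Example usage with a hypothetical checkpoint name:
config_class, model_class, tokenizer_class = MODEL_CLASSES["distilbert"]
config = config_class.from_pretrained("distilbert-base-uncased")
tokenizer = tokenizer_class.from_pretrained("distilbert-base-uncased")
encoder = model_class.from_pretrained("distilbert-base-uncased", config=config)
```

The bare `BertModel`/`DistilBertModel` classes return sequence (and, for BERT, pooled) representations rather than vocabulary logits, which is what the clone-detection head consumes.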
update readme
update readme
update inference in line completion
fix a corner case
feat: update default hyperparameters and early stopping
- Change the default value of `--dropout_probability` to 0, disabled by default
- Replace `--min_delta` with `--min_loss_delta` and keep the default value of 0.001
- Remove the default value of `--early_stopping_patience` and change it to None, disabled by default
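A minimal sketch of how these defaults and the early-stopping guard might be wired up. The argparse flags match the names above, but the surrounding training-loop function is a hypothetical illustration, not the script's actual code.

```python
import argparse

parser = argparse.ArgumentParser()
# Dropout disabled by default (probability 0).
parser.add_argument("--dropout_probability", type=float, default=0.0)
# Renamed from --min_delta; default of 0.001 is kept.
parser.add_argument("--min_loss_delta", type=float, default=0.001)
# No default: early stopping is disabled unless a patience value is passed.
parser.add_argument("--early_stopping_patience", type=int, default=None)
args = parser.parse_args()

def should_stop(epochs_without_improvement: int) -> bool:
    """Early stopping only applies when a patience value was provided."""
    if args.early_stopping_patience is None:
        return False
    return epochs_without_improvement >= args.early_stopping_patience
```

With `--early_stopping_patience` unset, training runs for the full number of epochs; an improvement only counts when the loss drops by at least `--min_loss_delta`.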
add method generation task
Bump torch from 1.8.1 to 1.13.1 in /Code-Code/TypePrediction-TypeScript

Bumps [torch](https://github.com/pytorch/pytorch) from 1.8.1 to 1.13.1.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/compare/v1.8.1...v1.13.1)

---
updated-dependencies:
- dependency-name: torch
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Delete run-bug2ref_small.sh
fix one C# LINQ statement error in code-to-code