Pokmonfanlol

12 Audio Reviews

1 w/ Responses

Generating long and coherent text is an important but challenging task, particularly for open-ended language generation tasks such as story generation. Despite the success in modeling intra-sentence coherence, existing generation models (e.g., BART) still struggle to maintain a coherent event sequence throughout the generated text. We conjecture that this is because of the difficulty for the decoder to capture the high-level semantics and discourse structures in the context beyond token-level co-occurrence. In this paper, we propose a long text generation model, which can represent the prefix sentences at sentence level and discourse level in the decoding process. To this end, we propose two pretraining objectives to learn the representations by predicting inter-sentence semantic similarity and distinguishing between normal and shuffled sentence orders. Extensive experiments show that our model can generate more coherent texts than state-of-the-art baselines.
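
The abstract describes the two auxiliary pretraining objectives only in prose, so a small worked example may help. Below is a minimal, illustrative PyTorch sketch of how a sentence-level similarity objective and a discourse-level order-discrimination objective could be wired up on top of pooled per-sentence decoder states. This is not the authors' released implementation; the class, head, and tensor names (CoherenceObjectives, sim_head, order_head, sent_repr, doc_repr) are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's released code) of the two auxiliary
# pretraining objectives named in the abstract:
#   1) predicting inter-sentence semantic similarity (sentence level)
#   2) distinguishing normal vs. shuffled sentence order (discourse level)
# All names and shapes below are assumptions for demonstration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CoherenceObjectives(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Projects pooled sentence representations before computing pairwise similarity.
        self.sim_head = nn.Linear(hidden_size, hidden_size)
        # Binary classifier: is the sentence sequence in its original order?
        self.order_head = nn.Linear(hidden_size, 2)

    def similarity_loss(self, sent_repr: torch.Tensor, target_sim: torch.Tensor) -> torch.Tensor:
        """Sentence-level objective: predict inter-sentence semantic similarity.

        sent_repr:  (batch, num_sents, hidden) pooled decoder states, one per sentence
        target_sim: (batch, num_sents, num_sents) reference similarities, e.g. cosine
                    similarity of off-the-shelf sentence embeddings, in [-1, 1]
        """
        proj = self.sim_head(sent_repr)                      # (batch, n, hidden)
        pred_sim = torch.matmul(proj, proj.transpose(1, 2))  # (batch, n, n)
        pred_sim = torch.tanh(pred_sim)                      # bound predictions to [-1, 1]
        return F.mse_loss(pred_sim, target_sim)

    def order_loss(self, doc_repr: torch.Tensor, is_shuffled: torch.Tensor) -> torch.Tensor:
        """Discourse-level objective: distinguish normal vs. shuffled sentence order.

        doc_repr:    (batch, hidden) pooled representation of the whole prefix
        is_shuffled: (batch,) label 1 if the sentence order was shuffled, else 0
        """
        logits = self.order_head(doc_repr)
        return F.cross_entropy(logits, is_shuffled)


if __name__ == "__main__":
    # Toy usage with random tensors, just to show the shapes involved.
    batch, n_sents, hidden = 2, 4, 768
    objectives = CoherenceObjectives(hidden)
    sent_repr = torch.randn(batch, n_sents, hidden)
    target_sim = torch.rand(batch, n_sents, n_sents) * 2 - 1
    doc_repr = torch.randn(batch, hidden)
    is_shuffled = torch.randint(0, 2, (batch,))
    aux_loss = objectives.similarity_loss(sent_repr, target_sim) + objectives.order_loss(doc_repr, is_shuffled)
    print(aux_loss.item())
```

In a pretraining setup of this kind, such auxiliary losses would typically be added to the usual token-level language-modeling loss; the toy usage above only demonstrates the tensor shapes with random inputs.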

Comments: ACL 2021 Long Paper
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2105.08963 [cs.CL] (or arXiv:2105.08963v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2105.08963

Submission history
From: Jian Guan
[v1] Wed, 19 May 2021 07:29:08 UTC (5,674 KB)

DBLP - CS Bibliography

Jian Guan
Changjie Fan
Zitao Liu
Minlie Huang

Perfection

Forget cataclysm, bloodbath, aftermath and bloodlust. Enjoy the song!

How to revolutionize an entire video game

Perfect

Cool

Underrated

This is so good I can't believe this is real

If you sing "never gonna give you up" from 0:13 it matches (kinda)

I like pokémon

Joined on 8/22/22

Level: 2
Exp Points: 40 / 50
Exp Rank: > 100,000
Vote Power: 2.55 votes
Rank: Civilian
Global Rank: > 100,000
Blams: 0
Saves: 0
B/P Bonus: 0%
Whistle: Normal