Search Results

        1 result found

        1.
        2022.12 | KCI-indexed | Free for subscribing institutions, paid for individual members
        The Quora search engine redirects users to different discussion pages based on the search terms they enter. As a result, when semantically similar questions are searched, Quora sometimes directs users to different discussion pages even though a page dedicated to that question already exists. In such cases, the semantic similarity between the questions carries the highest weight. Traditional methods for calculating text similarity treat the text as a sequence of words, simply count the words occurring in each sentence, and apply distance measures to those counts, missing the semantic-level knowledge of the text in the process. Such methods also require a large training set and considerable time to produce an accurate model. In this paper, a Siamese-based network is used that can train on a single example of each text and still produce an accurate similarity output. Different pre-trained word embedding models, such as word2vec and GloVe, are used to capture the semantics of the question pairs in the Quora Question Pairs dataset. The paper introduces a new approach to calculating sentence similarity that gives astonishing results, outperforming the current state-of-the-art Siamese LSTM models. Along with this new approach of using a Manhattan LSTM with an attention mechanism for similarity calculation, a comparative analysis is performed on the embedded question pairs across different Siamese LSTM models, such as LSTM and Manhattan LSTM, to predict whether the questions are similar and to find the best model combination for the Quora Question Pairs task.
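
        Below is a minimal, hypothetical PyTorch sketch (not the authors' code) of the Siamese Manhattan LSTM idea the abstract describes: both questions are encoded by the same LSTM, and similarity is scored as exp(-||h1 - h2||_1). The vocabulary size, embedding dimension, and hidden size are illustrative assumptions; the paper additionally uses pre-trained word2vec/GloVe embeddings and an attention mechanism, which are omitted here for brevity.

import torch
import torch.nn as nn


class SiameseManhattanLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=300, hidden_dim=50):
        super().__init__()
        # In the paper the embeddings come from pre-trained word2vec / GloVe;
        # a randomly initialised table stands in for them here.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):
        # Both questions pass through the *same* embedding and LSTM weights.
        embedded = self.embedding(token_ids)      # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return hidden[-1]                         # final hidden state per question

    def forward(self, q1_ids, q2_ids):
        h1, h2 = self.encode(q1_ids), self.encode(q2_ids)
        # MaLSTM similarity: exp(-||h1 - h2||_1), in (0, 1]; 1 means identical.
        return torch.exp(-torch.sum(torch.abs(h1 - h2), dim=1))


if __name__ == "__main__":
    model = SiameseManhattanLSTM()
    q1 = torch.randint(1, 20000, (4, 12))         # batch of 4 padded token-id sequences
    q2 = torch.randint(1, 20000, (4, 12))
    print(model(q1, q2))                          # similarity score per question pair

        Because the exponentiated negative L1 distance stays in (0, 1], the score can be trained directly against the binary duplicate labels of the Quora Question Pairs dataset with, for example, a mean-squared-error loss.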