Integrating Elasticsearch with Spring Boot

I. Introduction

Elasticsearch is an open-source distributed search and analytics engine built on Apache Lucene. It provides near-real-time full-text search and can handle many kinds of data, including structured, unstructured, and geospatial data. Known for its performance, scalability, and ease of use, it is widely applied to log analytics, full-text search, business intelligence, and security information and event management (SIEM).

II. Installation

1. Ubuntu

The steps to install Elasticsearch on Ubuntu are as follows.

Import the Elasticsearch public GPG key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the Elasticsearch repository to the APT source list:

sudo sh -c 'echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list'

Update the package index:

sudo apt-get update

Install Elasticsearch:

sudo apt-get install elasticsearch

Start the Elasticsearch service:

sudo systemctl start elasticsearch.service

Enable Elasticsearch to start on boot:

sudo systemctl enable elasticsearch.service

Verify that Elasticsearch is running:

curl -X GET "localhost:9200/"
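
If the node is up, the command returns a small JSON document describing it. The exact values depend on your installation; for a 7.x install it looks roughly like this (fields abbreviated, values illustrative):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.17.24",
    ...
  },
  "tagline" : "You Know, for Search"
}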

2. Configuration

Edit /etc/elasticsearch/elasticsearch.yml. network.host binds the node to the given address, discovery.seed_hosts lists the nodes used for cluster discovery, and cluster.initial_master_nodes names the nodes that take part in the initial master election when the cluster is first bootstrapped:

network.host: 192.168.1.9
discovery.seed_hosts: ["192.168.1.9"]
cluster.initial_master_nodes: ["node-1"]

III. Spring Boot

This example uses Spring Boot 2.7.18. Add the relevant dependencies:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <!-- Use a version compatible with your Elasticsearch server -->
    <version>7.17.24</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

Elasticsearch configuration class, which builds a RestHighLevelClient pointing at the node configured above:

@Configuration
public class ElasticsearchConfig {

    @Bean
    public RestHighLevelClient elasticsearchClient() {
        return new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("192.168.1.9", 9200, "http")
                )
        );
    }
}
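
The service below also references a few classes that are not shown here: NewsDto (the incoming request payload), News (the document to index), NewsVo (the search result view), and a SnowflakeIdWorker used to generate IDs. A minimal sketch of the model classes, with fields assumed from how they are used (the ID generator is omitted; any source of unique IDs will do):

public class NewsDto {
    private String title;
    private String content;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
}

public class News {
    private final Long id;
    private final String title;
    private final String content;

    public News(Long id, String title, String content) {
        this.id = id;
        this.title = title;
        this.content = content;
    }

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public String getContent() { return content; }
}

public class NewsVo {
    private final String title;
    private final String content;

    public NewsVo(String title, String content) {
        this.title = title;
        this.content = content;
    }

    public String getTitle() { return title; }
    public String getContent() { return content; }
}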

Service

@Service
public class NewsService {

    @Autowired
    private RestHighLevelClient client;

    /**
     * Save a news document
     * @param newsDto
     */
    public void save(NewsDto newsDto) throws IOException {
        Long id = SnowflakeIdWorker.generateId();
        News news = new News(id, newsDto.getTitle(), newsDto.getContent());

        IndexRequest request = new IndexRequest("news")
                .id(id.toString())
                .source(Map.of(
                        "title", news.getTitle(),
                        "content", news.getContent()
                ), XContentType.JSON);

        IndexResponse response = client.index(request, RequestOptions.DEFAULT);
        // getResult() returns a DocWriteResponse.Result enum, so compare against the enum constants
        if (response.getResult() == DocWriteResponse.Result.CREATED) {
            System.out.println("Document created with ID: " + response.getId());
        } else if (response.getResult() == DocWriteResponse.Result.UPDATED) {
            System.out.println("Document updated with ID: " + response.getId());
        }
    }

    /**
     * Search news by keyword across title and content
     * @param keyword
     * @return
     */
    public List<NewsVo> findNews(String keyword) throws IOException {
        SearchRequest searchRequest = new SearchRequest("news");
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
                .query(QueryBuilders.boolQuery()
                        .should(QueryBuilders.matchQuery("title", keyword))
                        .should(QueryBuilders.matchQuery("content", keyword)));

        searchRequest.source(sourceBuilder);

        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);

        return extractNewsFromSearchResponse(searchResponse);
    }
    private List<NewsVo> extractNewsFromSearchResponse(SearchResponse searchResponse) {
        return Arrays.stream(searchResponse.getHits().getHits())
                .map(hit -> {
                    Map<String, Object> sourceAsMap = hit.getSourceAsMap();
                    return new NewsVo(
                            (String) sourceAsMap.get("title"),
                            (String) sourceAsMap.get("content")
                    );
                })
                .collect(Collectors.toList());
    }

    /**
     * Multi-term match on content with highlighted fragments
     * @param query
     * @return
     */
    public List<NewsVo> findNewsSearch(String query) throws IOException {
        SearchRequest searchRequest = new SearchRequest("news");
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
                .query(QueryBuilders.matchQuery("content", query))
                .highlighter(new HighlightBuilder()
                        .field("content")
                        .preTags("<em>")
                        .postTags("</em>")
                        // Maximum length of each highlighted fragment
                        .fragmentSize(150)
                        // Maximum number of highlighted fragments to return
                        .numOfFragments(5));

        searchRequest.source(sourceBuilder);

        SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);

        return extractNewsWithHighlightFromSearchResponse(searchResponse);
    }
    private List<NewsVo> extractNewsWithHighlightFromSearchResponse(SearchResponse searchResponse) {
        return Arrays.stream(searchResponse.getHits().getHits())
                .map(hit -> {
                    Map<String, Object> sourceAsMap = hit.getSourceAsMap();
                    String title = (String) sourceAsMap.get("title");
                    String content = (String) sourceAsMap.get("content");

                    // Extract the highlighted content, if present
                    Map<String, HighlightField> highlightFields = hit.getHighlightFields();
                    if (highlightFields.containsKey("content")) {
                        HighlightField highlight = highlightFields.get("content");
                        Text[] fragments = highlight.fragments();
                        if (fragments.length > 0) {
                            // Use the first highlighted fragment
                            content = fragments[0].string();
                        }
                    }

                    return new NewsVo(title, content);
                })
                .collect(Collectors.toList());
    }

}
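
To round out the example, a minimal controller that exposes the service over HTTP could look like the sketch below. The NewsController class and its request mappings are illustrative assumptions, not part of the original code:

@RestController
@RequestMapping("/news")
public class NewsController {

    @Autowired
    private NewsService newsService;

    // Index a news document
    @PostMapping
    public String save(@RequestBody NewsDto newsDto) throws IOException {
        newsService.save(newsDto);
        return "ok";
    }

    // Keyword search across title and content
    @GetMapping("/search")
    public List<NewsVo> search(@RequestParam String keyword) throws IOException {
        return newsService.findNews(keyword);
    }

    // Content match with highlighted fragments
    @GetMapping("/highlight")
    public List<NewsVo> highlight(@RequestParam String query) throws IOException {
        return newsService.findNewsSearch(query);
    }
}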
