Connecting to MariaDB/MySQL from C/C++ on CentOS 7

Preface

Connecting to a database is most often discussed in the context of Java's JDBC, but C/C++ servers on Linux also commonly need database access. After reading a lot of material online and some experimenting of my own, I finally managed to connect to MariaDB, and I am writing the steps down here for reference.
The development environment is 64-bit CentOS 7 on Alibaba Cloud, with MariaDB installed via yum; see my other article for the installation tutorial.

Installing the locate tool

On some Linux distributions, using locate to quickly find a file path fails with the following error:

-bash: locate: command not found

The cause is that the mlocate package is not installed. Install it:

yum -y install mlocate

After installing, trying locate again still does not work and reports a new error:

locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory

The reason is that the file database has not been built after installation. Update it:

updatedb

Once that is done, locate can be used to find file paths quickly.

Connecting to MariaDB from C/C++

First, look at a simple program that connects to the database:

#include <stdlib.h>
#include <stdio.h>
#include <mysql/mysql.h>

int main(int argc, char *argv[])
{
    MYSQL *conn_ptr;
    conn_ptr = mysql_init(NULL); /* initialize the connection handle */
    if (!conn_ptr)
    {
        fprintf(stderr, "mysql_init failed\n");
        return EXIT_FAILURE;
    }

    /* Establish the actual connection. The arguments are: the initialized
     * handle, host name (or IP), user name, password, database name, port
     * (0 = default), unix socket (NULL = default) and client flags (0).
     * The last three can stay at these defaults for a stock installation. */
    if (mysql_real_connect(conn_ptr, "XXX.XXX.XXX.XXX", "root", "passwd",
                           "dbname", 0, NULL, 0))
    {
        printf("Connection success\n");
    }
    else
    {
        /* mysql_error() reports why the connection failed */
        fprintf(stderr, "Connection failed: %s\n", mysql_error(conn_ptr));
        mysql_close(conn_ptr);
        return EXIT_FAILURE;
    }

    mysql_close(conn_ptr); /* close the connection and free the handle */

    return EXIT_SUCCESS;
}

To connect to MySQL, you must first include the header: #include <mysql/mysql.h>
Then add -I/usr/include/mysql -L/usr/lib64/mysql -lmysqlclient to your Makefile.
If you run into other errors, see the troubleshooting section at the end.
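
Before moving on to the Makefile, here is what issuing a statement looks like once the connection succeeds. This is a minimal sketch that reuses conn_ptr from the program above; the user table and its id/name columns are hypothetical placeholders for your own schema:

/* Assumes conn_ptr is a connected handle from mysql_real_connect() */
if (mysql_query(conn_ptr, "SELECT id, name FROM user")) /* hypothetical table */
{
    fprintf(stderr, "Query failed: %s\n", mysql_error(conn_ptr));
}
else
{
    MYSQL_RES *res = mysql_store_result(conn_ptr); /* fetch the whole result set */
    if (res)
    {
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != NULL)
        {
            /* each column arrives as a C string (NULL for SQL NULL) */
            printf("id=%s name=%s\n", row[0], row[1]);
        }
        mysql_free_result(res); /* always free the result set */
    }
}

Note that mysql_store_result pulls the entire result set to the client at once; for large results, mysql_use_result streams rows one at a time instead.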
Example
Here is an example Makefile that links against MySQL:

CC=g++	# compiler
CFLAGS=-g	# build with debug info so gdb can be used
BIN=MicroChatServer	# name of the final executable
OBJS=sysutil.o	# one .o for each .c/.cpp source file
INCLUDES=-I/usr/include/mysql	# header search path, used when compiling
LIBS=-L/usr/lib64/mysql -lmysqlclient -ljsoncpp -lpthread	# libraries, used when linking

$(BIN):$(OBJS)
	$(CC) $(CFLAGS) $^ -o $@ $(LIBS)
%.o:%.cpp
	$(CC) $(CFLAGS) $(INCLUDES) -c $< -o $@

.PHONY:clean	# remove all object files and the executable
clean:
	rm -f *.o $(BIN)

Then run make, and the project builds and runs successfully.

Troubleshooting

Problem 1
connect1.c:4:19: error: mysql.h: No such file or directory
The compiler cannot find mysql.h. The header is missing because it ships in the mysql-devel package, so install that package:

sudo yum install mysql-devel -y 

Then locate it:

# locate mysql.h
/usr/include/mysql/mysql.h  

Now the header can be found (-I tells the compiler where to search for header files; see man gcc).
Problem 2
Compiling again produced a new error:

# gcc connect1.c -o connect1 -I/usr/include/mysql -lmysqlclient   
/usr/bin/ld: cannot find -lmysqlclient  
collect2: ld returned 1 exit status

This is a linking problem: the linker cannot find the mysqlclient library. man gcc shows that -L can be appended to specify a library search directory, so first find where the mysqlclient library lives:

# locate *mysqlclient*
/usr/lib64/mysql/libmysqlclient.so
/usr/lib64/mysql/libmysqlclient.so.18
/usr/lib64/mysql/libmysqlclient.so.18.0.0
/usr/lib64/mysql/libmysqlclient_r.so

A note here: on some systems the library is under /usr/lib/mysql/, but on 64-bit CentOS 7 it is under /usr/lib64/mysql/. This is exactly why installing mlocate and locating the library yourself matters: many blog posts simply state /usr/lib/mysql/ without qualification, which makes linking fail on systems where the path differs.
With the location known, compile:

gcc connect1.c -o connect1  -I/usr/include/mysql -L/usr/lib64/mysql -lmysqlclient

Once it compiles successfully, you can run it. Before doing so, make sure the database server is running (on CentOS 7, MariaDB is managed by systemd):

sudo systemctl restart mariadb

然后执行生成的可执行文件:

./connect1  
Connection success  

The success message is printed, confirming that connecting to the MySQL/MariaDB database from C/C++ works.
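
As a quick sanity check after connecting, you can also print the client and server versions; this is a small sketch using the same conn_ptr handle (mysql_get_server_info and mysql_get_client_info are part of the same client API):

/* Assumes conn_ptr is a connected handle, as in the program above */
printf("Server version: %s\n", mysql_get_server_info(conn_ptr));
printf("Client version: %s\n", mysql_get_client_info());

On a MariaDB server this makes it easy to confirm that the MySQL client library is in fact talking to MariaDB.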
