Natural language processing for web browsing analytics: Challenges, lessons learned, and opportunities
Entity: UAM. Departamento de Tecnología Electrónica y de las Comunicaciones
DOI: 10.1016/j.comnet.2021.108357. Computer Networks 198 (2021): 108357
Funded by: This research has been partially funded by the Spanish State Research Agency under the project AgileMon (AEI PID2019-104451RBC21) and by the Spanish Ministry of Science, Innovation and Universities under the program for the training of university lecturers (Grant number: FPU19/05678).
Project: Gobierno de España. PID2019-104451RBC21
Subjects: Deep learning; Internet monitoring; Natural language processing; Traffic monetization; Users analytics; Web browsing; Telecommunications
Rights: © 2021 The Authors
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
In an Internet arena where the revenues of search engines and other digital marketing firms peak, other actors still have open opportunities to monetize their users’ data. After convenient anonymization, aggregation, and agreement, the set of websites users visit may yield exploitable data for ISPs. Uses range from assessing the scope of advertising campaigns to reinforcing user fidelity, among other marketing approaches, as well as security issues. However, sniffers based on HTTP, DNS, TLS, or flow features do not suffice for this task. Modern websites are designed to preload and prefetch some contents, in addition to embedding banners, social networks’ links, images, and scripts from other websites. This self-triggered traffic makes it confusing to assess which websites users visited on purpose. Moreover, DNS caches prevent some queries for actively visited websites from even being sent. On this limited input, we propose to handle such domains as words and the sequences of domains as documents. This way, it is possible to identify the visited websites by translating this problem into a text classification context and applying the most promising techniques from the natural language processing and neural network fields. After applying different representation methods such as TF–IDF, Word2vec, Doc2vec, and custom neural networks in diverse scenarios and with several datasets, we can identify websites visited on purpose with accuracy figures over 90%, with peaks close to 100%, using processes that are fully automated and free of any human parametrization.
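The core idea of the abstract — treating each observed domain as a "word" and each sequence of domains as a "document" — can be sketched as a standard text classification pipeline. The following is a minimal illustration, not the authors' implementation: the domains, traces, labels, and the choice of TF–IDF with logistic regression are assumptions for demonstration only.

```python
# Hypothetical sketch: classify which website a user visited on purpose from
# the full set of domains contacted while the page loaded. All domains and
# labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is one browsing trace: the domains observed for one visit,
# space-separated, mixing first-party and self-triggered third-party traffic.
traces = [
    "news.example cdn.news.example ads.tracker.example fonts.static.example",
    "news.example img.news.example social.widget.example",
    "shop.example cdn.shop.example pay.gateway.example ads.tracker.example",
    "shop.example img.shop.example analytics.example",
]
labels = ["news.example", "news.example", "shop.example", "shop.example"]

# TF-IDF downweights ubiquitous third-party domains (ads, trackers) that
# appear across many traces, emphasizing the distinctive first-party ones.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+"),  # a "word" is any whitespace-free domain
    LogisticRegression(),
)
model.fit(traces, labels)

# Predict the intentionally visited site from a new, noisy trace.
prediction = model.predict(["shop.example ads.tracker.example analytics.example"])[0]
print(prediction)
```

The same pipeline generalizes directly to the richer representations the paper evaluates (Word2vec, Doc2vec, custom neural networks) by swapping the vectorizer stage.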
Google Scholar: Perdices Burrero, Daniel; Ramos, Javier; García Dorado, José Luis; González, Iván; López de Vergara Méndez, Jorge Enrique