The present invention discloses a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof. The system includes, but is not limited to, at least one processing unit and a non-transitory computer-readable medium storing instructions which, when executed by the one or more processors, cause an input device to receive data related to a plurality of features, wherein the received data comprises data related to called parties of mobile calls originated by the mobile platform. Further, the processing unit is configured to determine at least one feature in the plurality of features based on the received data, and to determine, by the computation server, when to classify the feature data into one of the categories and labels from a predefined set, using a measure of certainty, by performing certainty calculations at a plurality of time instances during the conversation.
Application ID | 202211023197 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 2022-04-20 |
Publication Number | 16/2022 |
Type | Published |
Name | Address | Country | Nationality
---|---|---|---|
Dr. Aanjey Mani Tripathi | Associate Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Heena Khera | Assistant Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Vishakha Chauhan | Research Scholar, Department of Computer Science and Engineering, SRM IST, Modinagar campus, Delhi NCR | India | India |
Vinod kumar | Assistant Professor, Department of Computing Application, ABES Engineering College, Ghaziabad | India | India |
Arunendra Mani Tripathi | Assistant Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Shalini | Research Scholar, Department of Science and Technology, Faculty of Education and Methodology, Jayoti Vidyapeeth Women's University, Rajasthan | India | India |
Surendra Singh Chauhan | Assistant Professor, School of Computer Science and Engineering, Galgotias University, Greater Noida | India | India |
Name | Address | Country | Nationality
---|---|---|---|
Dr. Aanjey Mani Tripathi | Associate Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Heena Khera | Assistant Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Vishakha Chauhan | Research Scholar, Department of Computer Science and Engineering, SRM IST, Modinagar campus, Delhi NCR | India | India |
Vinod kumar | Assistant Professor, Department of Computing Application, ABES Engineering College, Ghaziabad | India | India |
Arunendra Mani Tripathi | Assistant Professor, School of Computing Science and Engineering, Galgotias University, Greater Noida | India | India |
Shalini | Research Scholar, Department of Science and Technology, Faculty of Education and Methodology, Jayoti Vidyapeeth Women's University, Rajasthan | India | India |
Surendra Singh Chauhan | Assistant Professor, School of Computer Science and Engineering, Galgotias University, Greater Noida | India | India |
[001] The present invention relates to the field of systems, apparatus and methods for user adaptation on a mobile application and portable computing device and techniques thereof. The invention more particularly relates to a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof.
BACKGROUND OF THE INVENTION
[002] The following description provides the information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[003] Further, the approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
[004] Mobile devices are pervasive in modern communication networks. Many of these mobile devices are capable of running one or more applications while acting as a communication device. The applications and/or the smart phone itself can have a number of settings subject to user control, such as, but not limited to, volume settings, network addresses/names, contact data, and calendar information. Further, the user can change some or all of these settings based on their context, such as location and activity. For example, but not limited to, the user can turn down a ringing volume and/or mute a ringer prior to watching a movie at a movie theater. After the movie ends, the user can then turn up the ringing volume and/or un-mute the ringer.
[005] Accordingly, on the basis of aforesaid facts, there remains a need in the prior art to provide a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof. The proposed system overcomes the problem and provides a context-identification system executing on a mobile platform. Therefore, it would be useful and desirable to have a system, method, apparatus and interface to meet the above-mentioned needs.
SUMMARY OF THE PRESENT INVENTION
[006] In view of the foregoing disadvantages inherent in the known types of conventional user adaptation systems and methods on mobile applications and/or portable computing devices now present in the prior art, the present invention provides a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof, which has all the advantages of the prior art and none of the disadvantages. The object of the present invention is to avoid the above-mentioned problems and to provide a unified system for delivering better and more efficient context-identified services that can predict a communicative action associated with the mobile platform by performing a machine-learning operation on the received data.
[007] The main aspect of the present invention is to provide a system comprising, but not limited to, a machine-learning service executing on a mobile platform that receives feature-related data. The feature-related data can be, for example, volume-related data about one or more volume-related settings for the mobile platform and platform-related data received from the mobile platform. The volume-related data and the platform-related data differ from each other, and the machine-learning service is configured to determine whether it is trained to perform machine-learning operations related to predicting a change in the one or more volume-related settings for the mobile platform. Further, the mobile platform is provided through a network, which can be any electronic communication medium or hub that facilitates communications between two or more entities, including but not limited to an internet, an intranet, a local area connection, a cloud-based connection, a wireless connection, a radio connection, a physical electronic bus, or any other medium over which digital and electronic information may be sent and received.
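By way of non-limiting illustration only, the following sketch shows one way such a service could accumulate volume-related and platform-related feature records and defer prediction until it considers itself trained. The names used (FeatureRecord, MachineLearningService, MIN_TRAINING_SAMPLES) and the majority-vote prediction rule are assumptions of this illustration, not the claimed implementation.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Optional

MIN_TRAINING_SAMPLES = 20  # assumed threshold before the service treats itself as trained


@dataclass
class FeatureRecord:
    location: str       # platform-related data, e.g. a coarse location label
    hour_of_day: int    # platform-related data
    ringer_volume: int  # volume-related data observed after a user adjustment


class MachineLearningService:
    """Toy stand-in for the on-device machine-learning service."""

    def __init__(self) -> None:
        self._history: List[FeatureRecord] = []

    def receive(self, record: FeatureRecord) -> None:
        """Accumulate feature-related data reported by the mobile platform."""
        self._history.append(record)

    def is_trained(self) -> bool:
        """The service only performs predictions once enough data has been seen."""
        return len(self._history) >= MIN_TRAINING_SAMPLES

    def predict_volume(self, location: str, hour_of_day: int) -> Optional[int]:
        """Predict the most likely ringer volume for a context, if trained."""
        if not self.is_trained():
            return None  # defer until training data is sufficient
        matches = [r.ringer_volume for r in self._history
                   if r.location == location and abs(r.hour_of_day - hour_of_day) <= 1]
        if not matches:
            return None
        return Counter(matches).most_common(1)[0][0]  # majority vote over similar contexts
```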
[008] In another aspect of the present invention, the machine learning interface is configured to provide a machine learning and adaptation service API, a machine learning and adaptation engine, a data aggregation and representation engine, a service manager, and machine learning and adaptation service network support.
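Purely as an illustrative sketch of how the components named above could be wired together, and not as the actual claimed interface, the following example composes a data aggregation engine, an adaptation engine, a service manager and a service API; all class names and the trivial prediction rule are assumptions of this illustration.

```python
class DataAggregator:
    """Data aggregation and representation engine: buffers reported samples."""

    def __init__(self) -> None:
        self.samples = []

    def add(self, sample: dict) -> None:
        self.samples.append(sample)


class AdaptationEngine:
    """Machine learning and adaptation engine: learns from aggregated samples."""

    def __init__(self) -> None:
        self._samples = []

    def train(self, samples: list) -> None:
        self._samples = list(samples)

    def predict(self, context: dict) -> dict:
        matching = [s for s in self._samples if s.get("location") == context.get("location")]
        # Trivial rule for illustration: echo the most recent observation for the context.
        return {"volume": matching[-1]["volume"]} if matching else {}


class ServiceManager:
    """Service manager: owns the aggregator and engine lifecycles."""

    def __init__(self) -> None:
        self.aggregator = DataAggregator()
        self.engine = AdaptationEngine()


class MLAdaptationAPI:
    """Machine learning and adaptation service API exposed to applications."""

    def __init__(self, manager: ServiceManager) -> None:
        self._manager = manager

    def report(self, sample: dict) -> None:
        self._manager.aggregator.add(sample)
        self._manager.engine.train(self._manager.aggregator.samples)

    def query(self, context: dict) -> dict:
        return self._manager.engine.predict(context)


api = MLAdaptationAPI(ServiceManager())
api.report({"location": "cinema", "volume": 0})
print(api.query({"location": "cinema"}))  # {'volume': 0}
```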
[009] The proposed system and method are implemented on a processing unit functioning with, but not limited to, Field Programmable Gate Arrays (FPGAs) and the like, PCs, microcontrollers and other known processors, supporting computer algorithms and instruction upgrades for the many application domains where a solution to the aforesaid problems is required.
[010] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the need of that industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[011] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[012] The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
[013] FIG. 1, illustrates a schematic diagram of a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof, in accordance with an embodiment of the present invention; and
[014] FIG. 2, illustrates another block diagram of the system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[015] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiment or drawings described, and the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
[016] In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
[017] The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[018] Referring now to the drawings, as illustrated in FIG. 1 and FIG. 2, the present invention discloses a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof. The system comprises, but is not limited to, at least one processing unit and non-transitory computer-readable media storing instructions which, when executed by the one or more processors, cause an input device to receive data related to a plurality of features, wherein the received data comprises data related to called parties of mobile calls originated by the mobile platform.
[019] In accordance with another embodiment of the present invention, the processing unit is configured for determining at least one feature in the plurality of features based on the received data.
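As a minimal sketch only, assuming the received data is a list of call records each carrying a called_party field and a timestamp (field names and values are illustrative assumptions), feature determination from the call data could look as follows:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call records originated by the mobile platform.
call_records = [
    {"called_party": "party-A", "started": datetime(2022, 4, 1, 9, 30)},
    {"called_party": "party-B", "started": datetime(2022, 4, 1, 13, 5)},
    {"called_party": "party-A", "started": datetime(2022, 4, 2, 9, 45)},
]


def determine_features(records):
    """Determine at least one feature from the received call data."""
    counts = Counter(r["called_party"] for r in records)
    most_frequent, frequency = counts.most_common(1)[0]
    return {
        "most_frequent_called_party": most_frequent,
        "call_count_for_most_frequent": frequency,
        "distinct_called_parties": len(counts),
    }


print(determine_features(call_records))
# {'most_frequent_called_party': 'party-A', 'call_count_for_most_frequent': 2,
#  'distinct_called_parties': 2}
```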
[020] In accordance with another embodiment of the present invention, the processing unit is configured for determining, by the computation server, when to classify the feature data into one of the categories and labels from a predefined set, using a measure of certainty, by performing certainty calculations at a plurality of time instances during the conversation.
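The following is a hedged sketch of such a certainty check, not the claimed method: at several time instances during a conversation, accumulated scores over a fixed label set are evaluated, and classification is emitted only once the measure of certainty (here, the normalised top score) clears a threshold. The label names, scores and the 0.7 threshold are assumptions of this illustration.

```python
LABELS = ("work_call", "personal_call", "unknown")
CERTAINTY_THRESHOLD = 0.7  # assumed cut-off; not specified by the disclosure


def certainty(scores: dict) -> float:
    """Measure of certainty: the normalised top score over all labels."""
    total = sum(scores.values()) or 1.0
    return max(scores.values()) / total


def classify_when_certain(score_snapshots):
    """score_snapshots: one dict of label -> accumulated score per time instance."""
    for t, scores in enumerate(score_snapshots):
        if certainty(scores) >= CERTAINTY_THRESHOLD:
            label = max(scores, key=scores.get)
            return t, label          # classify as soon as certainty is sufficient
    return None, "unknown"           # never certain within the conversation


snapshots = [
    {"work_call": 0.4, "personal_call": 0.4, "unknown": 0.2},    # t = 0: ambiguous
    {"work_call": 0.6, "personal_call": 0.3, "unknown": 0.1},    # t = 1: still below 0.7
    {"work_call": 0.8, "personal_call": 0.15, "unknown": 0.05},  # t = 2: certain
]
print(classify_when_certain(snapshots))  # -> (2, 'work_call')
```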
[021] In accordance with another embodiment of the present invention, an output device is provided for producing an output by performing a machine-learning operation on the at least one feature of the plurality of features, the machine-learning operation being selected from among an operation of ranking the at least one feature and an operation of classifying the at least one feature.
[022] In accordance with another embodiment of the present invention, the processing unit is configured for performing an operation of predicting the at least one feature and an operation of clustering the said feature; further, the output comprises a prediction of a volume setting and/or a mute setting of the mobile platform, and the output is sent.
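As another illustrative sketch only, the example below clusters observed contexts and bundles a predicted volume setting with a mute flag into one output, in the spirit of this embodiment. The coarse day/night clustering rule and the threshold for the mute flag are assumptions, not the claimed operation.

```python
from statistics import mean

# Observed (hour_of_day, ringer_volume) pairs; values are illustrative.
observations = [(9, 6), (10, 7), (9, 7), (22, 0), (23, 0), (21, 1)]


def cluster_by_period(obs):
    """Group observations into coarse day/night clusters and average volumes."""
    clusters = {"day": [], "night": []}
    for hour, volume in obs:
        clusters["day" if 7 <= hour < 20 else "night"].append(volume)
    return {period: mean(volumes) for period, volumes in clusters.items() if volumes}


def build_output(hour, clusters):
    """Output comprising a predicted volume setting and a mute setting."""
    predicted = clusters["day" if 7 <= hour < 20 else "night"]
    return {
        "predicted_volume": round(predicted),
        "mute": predicted < 1,  # a near-zero typical volume implies muting
    }


clusters = cluster_by_period(observations)
print(build_output(22, clusters))  # {'predicted_volume': 0, 'mute': True}
```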
[023] In accordance with another embodiment of the present invention, the machine learning interface is configured to have program code to perform an intra-object pruning operation including identifying a set of matching keypoint descriptors for a plurality of feature descriptors in each object and removing one or more of the matching keypoint descriptors within each set of matching keypoint descriptors.
[024] In accordance with another embodiment of the present invention, program code is provided to associate the remaining keypoints with an object identifier and, further, program code to store the associated remaining keypoints and object identifier in a real-time database.
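A minimal sketch of the intra-object pruning and storage described in [023] and [024] follows, under stated assumptions: descriptors within the same object that match each other (here, by exact equality of the descriptor values) are collapsed to a single keypoint, and the survivors are stored against the object identifier. The in-memory dictionary stands in for the real-time database; the matching test and all names are illustrative.

```python
def prune_intra_object(descriptors):
    """Remove matching (duplicate) keypoint descriptors within one object."""
    seen, remaining = set(), []
    for keypoint, descriptor in descriptors:
        key = tuple(descriptor)          # matching test: identical descriptor values
        if key in seen:
            continue                     # drop redundant matches within the object
        seen.add(key)
        remaining.append((keypoint, descriptor))
    return remaining


realtime_db = {}  # object identifier -> list of (keypoint, descriptor)


def store(object_id, descriptors):
    """Associate the remaining keypoints with the object identifier and store them."""
    realtime_db[object_id] = prune_intra_object(descriptors)


store("object-42", [
    ((10, 12), [0.1, 0.9, 0.3]),
    ((11, 13), [0.1, 0.9, 0.3]),   # matches the first descriptor: pruned
    ((40, 55), [0.7, 0.2, 0.5]),
])
print(realtime_db["object-42"])    # two keypoints remain for the identifier
```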
[025] In accordance with another embodiment of the present invention, the processing unit is further configured to have a comparator for comparing the specific feature with the at least one selected feature location-wise; and activation means for activating the at least one feature depending upon whether the specific feature of the mobile station is within a predetermined distance of the at least one selected activation location.
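As a hedged sketch of the comparator and activation means described above, the mobile station's current location can be compared against a stored activation location and the feature activated only inside a predetermined radius. The 100 m radius, the haversine comparison and the coordinate values are assumptions for illustration only.

```python
from math import asin, cos, radians, sin, sqrt

ACTIVATION_RADIUS_M = 100.0  # assumed predetermined distance


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000.0 * asin(sqrt(a))


def should_activate(current, activation_location, radius_m=ACTIVATION_RADIUS_M):
    """Comparator plus activation means: activate when within the radius."""
    return haversine_m(*current, *activation_location) <= radius_m


activation_location = (28.4744, 77.5040)                            # assumed stored location
print(should_activate((28.4746, 77.5041), activation_location))     # True: a few metres away
print(should_activate((28.4800, 77.5200), activation_location))     # False: well outside radius
```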
[026] Further, an exemplary computer system for implementing various embodiments consistent with the present disclosure may be used for implementing a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof. The computer system may comprise a central processing unit ("CPU" or "processor"). The processor may comprise at least one data processor for executing program components for executing user or system-generated requests. A user may include a person, a person using a device such as those included in this disclosure, or such a device itself. The processor may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc. The processor may include a microprocessor, such as AMD Athlon, Duron or Opteron, ARM's application, embedded or secure processors, IBM PowerPC, Intel's Core, Itanium, Xeon, Celeron or other lines of processors, etc. The processor may be implemented using mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application-specific integrated circuits (ASICs), digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), etc.
[027] The processor may be disposed in communication with one or more input/output (I/O) devices via I/O interfaces. The I/O interfaces may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
[028] In some embodiments, the processor may be disposed in communication with one or more memory devices (e.g., RAM, ROM, etc.) via a storage interface. The storage interface may connect to memory devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent discs (RAID), solid-state memory devices, solid-state drives, etc. The memory devices may store a collection of program or database components, including, without limitation, an operating system, user interface application, web browser, mail server, mail client, user/application data (e.g., any data variables or data records discussed in this disclosure), etc. The operating system may facilitate resource management and operation of the computer system. Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows, Apple iOS, Google Android, Blackberry OS, or the like.
[029] The words "module," "model," "algorithm" and the like, as used herein, refer to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, Java, C, Python or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device. Further, in various embodiments, the processor is one of, but not limited to, a general-purpose processor, an application-specific integrated circuit (ASIC) and a field-programmable gate array (FPGA) processor. Furthermore, the data repository may be a cloud-based storage or a hard disk drive (HDD), solid-state drive (SSD), flash drive, ROM or any other data storage means.
[030] With regard to the preciseness of its real-world applications, the present invention provides a system for providing native machine learning service for user adaptation on a mobile application and portable computing device and method thereof, for example, by providing the machine learning interface with an adaptation service based on past learning and trained data so as to permit use of the machine learning and adaptation service as a toolkit of machine-learning techniques.
[031] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[032] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[033] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention.
We Claim:
1. A system for providing native machine learning service for user adaptation on a mobile application and portable computing device, comprising:
one or more processing units; and
a non-transitory computer-readable storage medium configured to store instructions that, when executed by the one or more processing units, cause the mobile platform to perform functions comprising:
an input device for receiving data related to a plurality of features wherein the received data comprises data related to called parties of mobile calls originated by the mobile platform,
wherein the processing unit is configured for determining at least one feature in the plurality of features based on the received data.
2. The system as claimed in claim 1, wherein the processing unit is configured for determining, by the computation server, when to classify the feature data into one of the categories and labels from a predefined set, using a measure of certainty, by performing certainty calculations at a plurality of time instances during the conversation.
3. The system as claimed in claim 1, wherein an output device is configured for providing an output by performing a machine-learning operation on the at least one feature of the plurality of features, the machine-learning operation being selected from among an operation of ranking the at least one feature and an operation of classifying the at least one feature.
4. The system as claimed in claim 1, wherein the processing unit is configured for performing an operation of predicting the at least one feature and an operation of clustering the said feature, and further, the output comprises a prediction of a volume setting and/or a mute setting of the mobile platform, and the output is sent.
5. The system as claimed in claim 1, wherein the machine learning interface is configured to have program code to perform an intra-object pruning operation including identifying a set of matching keypoint descriptors for a plurality of feature descriptors in each object and removing one or more of the matching keypoint descriptors within each set of matching keypoint descriptors.
6. The system as claimed in claim 1, wherein program code is provided to associate remaining keypoints with an object identifier and, further, program code to store the associated remaining keypoints and object identifier in a real-time database.
7. The system as claimed in claim 1, wherein the processing unit is further configured to have a comparator for comparing the specific feature with the at least one selected feature location-wise; and activation means for activating the at least one feature depending upon whether the specific feature of the mobile station is within a predetermined distance of the at least one selected activation location.
8. The system as claimed in claim 1, wherein the machine learning interface is configured to provide an adaptation service on the basis of past learning and trained data to permit use of machine learning and adaptation service as a toolkit of machine-learning techniques.
9. The system as claimed in claim 1, wherein the machine learning interface is configured to provide machine learning and adaptation service API, machine learning and adaptation engine, data aggregation and representation engine, service manager, and machine learning and adaptation service network support.